One small error taught me a big lesson about debugging 🚀

Today while solving a SQL problem on LeetCode, I ran into this:

TypeError: write() argument 1 must be unicode, not str

At first glance, it looked like I had completely messed up. I rechecked my query. Tried different approaches. Still the same error.

Then I realized the real issue… I had used ROWID — which works perfectly in Oracle. But LeetCode runs on MySQL, where ROWID doesn't exist. And instead of a clear SQL error, it threw a Python error. That's what made it confusing.

That moment taught me something important: not all errors mean your logic is wrong. Sometimes you just need to understand the environment you're working in. Debugging isn't only about fixing code… it's about thinking deeper and asking the right questions.

Back to learning 🚀

#SQL #Debugging #LeetCode #BackendDevelopment #LearningJourney #ProblemSolving
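The same lesson is easy to reproduce locally: SQL that leans on an engine-specific feature only fails when the environment changes. A minimal sketch using Python's built-in sqlite3 as a stand-in engine (the table and data are invented for illustration, not the LeetCode problem):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO person VALUES (?, ?)",
    [(1, "a@x.com"), (2, "a@x.com"), (3, "b@x.com")],
)

# SQLite happens to support ROWID, but MySQL does not: identical query
# text can succeed in one engine and fail in another.
with_rowid = conn.execute(
    "SELECT MIN(rowid) FROM person GROUP BY email ORDER BY 1"
).fetchall()

# A portable rewrite keys on a real column instead of an
# engine-specific pseudo-column.
portable = conn.execute(
    "SELECT MIN(id) FROM person GROUP BY email ORDER BY 1"
).fetchall()
```

Both queries keep the first row per email here, but only the second one would survive a move to MySQL.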
Kishore Paraman’s Post
I was loading CSV files into SQL Server. It was slow. Then I switched to BULK INSERT. 💥 Everything changed.

BULK INSERT is a native SQL Server command. It is built for speed. But the real power comes when you combine it with Python.

What did I do?
✔️ Python to handle multiple CSV files
✔️ Python to clean and normalize data
✔️ BULK INSERT for fast loading into SQL Server

This combination is simple. And very powerful. Python manages flexibility. SQL manages performance.

The result:
a faster ingestion process
a cleaner pipeline
a more reliable system

Data ingestion is not just loading. It's about speed and control.

Curious how it works?
🔗 GitHub repository: https://lnkd.in/dwjwP-bh

P.S. I know BULK sounds a lot like "HULK"… not very original, but I like it 😄
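The division of labor can be sketched in a few lines. This is a hedged sketch, not the repo's code: the file paths, the dbo.Sales table name, and the WITH options are assumptions, and the actual execution is left as a comment because it needs a live SQL Server connection (e.g. via pyodbc).

```python
import csv
from pathlib import Path

def clean_csv(src: Path, dst: Path) -> None:
    """Python side: normalize a raw CSV before handing it to SQL Server."""
    with src.open(newline="") as f_in, dst.open("w", newline="") as f_out:
        writer = csv.writer(f_out)
        for row in csv.reader(f_in):
            if any(cell.strip() for cell in row):          # drop blank lines
                writer.writerow(cell.strip() for cell in row)

def bulk_insert_sql(table: str, path: Path) -> str:
    """SQL side: build a BULK INSERT statement for the cleaned file."""
    return (
        f"BULK INSERT {table} FROM '{path}' "
        "WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', "
        "ROWTERMINATOR = '\\n', TABLOCK)"
    )

# Execution would go through a live connection, e.g.:
#   cursor.execute(bulk_insert_sql("dbo.Sales", staged_file))
stmt = bulk_insert_sql("dbo.Sales", Path("staged/sales.csv"))
```

Python loops over the files and scrubs them; SQL Server does the heavy lifting in one native statement per file.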
Continuing from last week's report, this week I learned about SQL and Git. I had some experience with databases before, using Python, but mostly limited to CRUD. So at first, I thought SQL would be similar. Turns out, it's much more than that.

Here are some key things I learned this week:
• Basic SQL (DDL, DML, queries like SELECT, WHERE, etc.)
• Data processing using aggregate functions & GROUP BY
• Joining multiple tables using JOIN
• Version control with Git and its workflow
• Using GitHub for collaboration and tracking changes

This made me realize that SQL is not just about CRUD, but about structuring and processing data more effectively. Still learning, but it's starting to make more sense why these fundamentals matter.

If you're interested, feel free to check out the slides I've shared.

#DigitalSkola #LearningProgressReview #DataScience
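The pieces above compose naturally: a JOIN to combine tables, then an aggregate with GROUP BY. A minimal runnable illustration using Python's built-in sqlite3 (the students/submissions schema is an invented example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (id INTEGER, name TEXT);
    CREATE TABLE submissions (student_id INTEGER, score INTEGER);
    INSERT INTO students VALUES (1, 'Ana'), (2, 'Budi');
    INSERT INTO submissions VALUES (1, 80), (1, 90), (2, 70);
""")

# JOIN the two tables, then aggregate per student with GROUP BY.
rows = conn.execute("""
    SELECT s.name, AVG(sub.score)
    FROM students s
    JOIN submissions sub ON sub.student_id = s.id
    GROUP BY s.name
    ORDER BY s.name
""").fetchall()
print(rows)  # [('Ana', 85.0), ('Budi', 70.0)]
```

Exactly the kind of query that goes beyond CRUD: no single row holds the answer until the JOIN and the aggregate build it.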
50 orders. 51 database queries.

That's what I found when I finally checked the query count on an endpoint I'd shipped two weeks earlier. It looked fine locally. Response times were normal — but I was testing on maybe 8 records. Real data hit it and the thing crawled. 4+ seconds for a simple order list.

The fix: a single select_related on the foreign key. One JOIN. Done. The 4-second response dropped to under 80ms.

But here's the thing — the broken code reads fine. There's nothing obviously wrong with it. You'd write it without blinking. I did. The ORM hides the cost so well that you only find out at the wrong moment.

I've got django-debug-toolbar running locally now. Not optional anymore.

For M2M or reverse FK relations it's prefetch_related — different mechanism, same idea. Worth knowing which to reach for before you need to.

How are you catching N+1s before staging — toolbar, SQL logging, something else?

#django #python #djangorestframework #backenddev #pythondev
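The failure mode is easy to reproduce outside Django. A sketch with sqlite3 that counts statements: one query per order versus a single JOIN. The schema is invented; in the Django ORM, the JOIN version corresponds to what select_related generates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE "order" (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customer VALUES (1, 'a'), (2, 'b');
    INSERT INTO "order" (customer_id) VALUES (1), (1), (2), (2), (2);
""")

queries = 0

def run(sql, *args):
    """Execute one statement and count it, like a debug toolbar would."""
    global queries
    queries += 1
    return conn.execute(sql, args).fetchall()

# N+1 shape: one query for the list, one more per row for the relation.
orders = run('SELECT id, customer_id FROM "order"')
names = [run("SELECT name FROM customer WHERE id = ?", cid)[0][0]
         for _, cid in orders]
n_plus_one = queries            # 1 + 5 rows = 6 queries

# JOIN shape: everything in a single statement.
queries = 0
joined = run('SELECT o.id, c.name FROM "order" o '
             "JOIN customer c ON c.id = o.customer_id")
print(n_plus_one, queries)  # 6 1
```

Five rows already cost six round trips; at 50 orders that's the 51 queries from the post.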
🚀 Day 14 – SQL Challenge

"Find students who know C"… Simple? Not when your data looks like this:
👉 C, Python, C++

Now the real question: how do you match only 'C' and ignore C++ / C#?

I explored multiple approaches 👇

🔹 1. LIKE-based logic (pattern matching)
Works, but gets messy with multiple conditions.

🔹 2. Split-based approach using SUBSTRING_INDEX
Simulates splitting and gives more control.

🔹 3. FIND_IN_SET (simple & effective)
Sometimes the simplest solution wins:
WHERE FIND_IN_SET('C', skills)

🔥 Key Insight: there's no single "best" solution in SQL. It's about choosing between readability, scalability, and simplicity based on the situation.

Which approach would you pick? 👇

Thanks for the suggestion, Ratan Kumar jha! 🙌 Tried a split-based approach as well using SUBSTRING_INDEX to simulate splitting in MySQL. Really helped make the logic cleaner and more structured.

#SQL #DataAnalytics #SQLChallenge #ProblemSolving #LearningInPublic
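FIND_IN_SET is MySQL-specific, but the underlying trick is portable: wrap both the list and the search term in delimiters so 'C' can only match as a whole token, never as a prefix of 'C++' or 'C#'. A runnable sketch in Python's sqlite3, which lacks FIND_IN_SET (the student data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, skills TEXT)")
conn.executemany("INSERT INTO students VALUES (?, ?)", [
    ("Asha", "C, Python, C++"),
    ("Ravi", "C#, Java"),
    ("Mina", "Python"),
])

# Strip spaces, wrap the list in commas, and a plain LIKE can only
# match the exact token ',C,' -- never ',C++,' or ',C#,'.
rows = conn.execute("""
    SELECT name FROM students
    WHERE ',' || REPLACE(skills, ' ', '') || ',' LIKE '%,C,%'
""").fetchall()
print(rows)  # [('Asha',)]
```

Same idea as FIND_IN_SET, expressed with operators every engine has, so it travels between dialects.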
I had 18,115 AWS API operation names in PascalCase that needed to become kebab-case. DescribeInstances to describe-instances. PutBucketAcl to put-bucket-acl.

AWS's acronym casing is inconsistent across services, and I was not writing a custom Python converter for 18,000 edge cases.

DuckDB has a community extension for this:

INSTALL inflector FROM community;
LOAD inflector;
SELECT inflector_to_kebab_case('DescribeInstances');
-- describe-instances

All 18,115 operations in one SQL pass. It also does snake_case, camelCase, train-case, pluralization, and bulk column renaming on structs.

I used it to keep the raw PascalCase botocore contract in parquet and transform at query time — no slow Python string manipulation.

https://lnkd.in/e8a_Aitd

#duckdb #dataengineering #platformengineering #aws
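For comparison, here is the kind of naive Python converter the post is avoiding. It handles simple PascalCase fine, but an acronym run comes apart letter by letter, which is exactly the hand-maintained edge-case list a curated extension spares you. The example names are illustrative:

```python
import re

def to_kebab_naive(name: str) -> str:
    """Insert a dash before every uppercase letter, then lowercase.

    Fine for simple PascalCase, but an acronym run like 'URL' or
    'HLS' explodes into 'u-r-l' / 'h-l-s' -- the edge cases the post
    did not want to hand-maintain for 18,000 names.
    """
    return re.sub(r"(?<!^)(?=[A-Z])", "-", name).lower()

print(to_kebab_naive("DescribeInstances"))  # describe-instances
print(to_kebab_naive("PutBucketAcl"))       # put-bucket-acl
print(to_kebab_naive("GetHLSStreamingSessionURL"))
# get-h-l-s-streaming-session-u-r-l  <- wrong, wants get-hls-...
```

Writing acronym-aware rules per service is possible, but at 18,115 names a tested extension in one SQL pass is the better trade.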
🎯 Don't Blame the Tool: Fix the Right Layer in SMTP & Report Delays

Suppose you encounter an SMTP-related issue and assume that your report server is slow, or that Python can somehow improve the situation. Before jumping to conclusions or introducing new tools, it's important to understand the hierarchy and flow of the process.

Your requirement is simple: a procedure runs through a scheduled job and performs three steps:

1. Run the report as a PDF and save it into a runtime BLOB.
2. Create the HTML body and attach the generated file.
3. Send the email.

Now, where does Python actually fit in?

For the third step, if the email server is not responding at send time, Python can be used effectively to handle retries. Instead of letting the process fail, Python can keep attempting to send the email until it reaches the recipients. This ensures reliability and reduces failure scenarios.

The second step is not complex in this context. It executes quickly and does not usually introduce performance concerns.

The first step, however, is the critical one. If your query or report generation is slow, Python will not solve that problem. The only solution is to optimize the query itself.

In short, use the right tool for the right problem:
Use Python for handling retry logic in email delivery.
Fix SQL/PLSQL performance issues at the source.

#sql #plsql #NadirAli
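The retry idea for step 3 can be sketched generically. This is an illustration, not the author's job code: the function names, attempt count, and delays are assumptions, and in a real pipeline the callable passed in would wrap smtplib.SMTP.send_message.

```python
import time

def send_with_retry(send, attempts=5, base_delay=1.0):
    """Call send() until it succeeds, backing off between failures.

    'send' stands in for the real SMTP step, e.g. a function wrapping
    smtplib.SMTP.send_message. Note this only retries delivery --
    a slow report query (step 1) must be fixed in SQL, not here.
    """
    for attempt in range(1, attempts + 1):
        try:
            return send()
        except OSError:          # SMTP/network errors subclass OSError
            if attempt == attempts:
                raise            # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical usage: a flaky sender that fails twice, then works.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("mail server not responding")
    return "sent"

result = send_with_retry(flaky_send, base_delay=0.01)
```

The exponential backoff keeps a briefly unavailable mail server from turning into a failed report run.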
Built **UwU DB 🐾**

> studying for my DBMS university exam
> textbook says "B+ Trees are the core of modern storage engines"
> think: "I should probably just memorize the time complexities"
> also think: "Nah, I'll just build the engine myself"
> write a custom B+ tree core in modern C++
> glue it to a Python FastAPI backend using pybind11
> voilà, custom database. lol.

Jokes aside, building this was an absolute masterclass in memory management. UwU DB features O(log n) CRUD operations, MVCC-style lazy deletions (tombstones) to prevent cascading tree locks, and memory-safe C++ pointers so it doesn't leak RAM.

Is it production-ready? No. Will it replace PostgreSQL? Absolutely not. Did I learn more doing this than reading the textbook? 100%.

If you want to compile the ugly-ahh `.so` extension yourself and spin up the server, here it is: https://lnkd.in/dvx6avw8
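None of the C++ core is shown in the post, but the property it banks on (keys kept sorted so lookups stay O(log n)) and the tombstone-delete idea can both be illustrated in a few lines with Python's bisect. This is a toy single-leaf stand-in, nothing like the real tree:

```python
import bisect

class ToyLeaf:
    """One sorted 'leaf': binary-search lookups, tombstone deletes."""

    def __init__(self):
        self.keys, self.vals = [], []

    def put(self, key, val):
        i = bisect.bisect_left(self.keys, key)     # O(log n) position
        if i < len(self.keys) and self.keys[i] == key:
            self.vals[i] = val                     # update in place
        else:
            self.keys.insert(i, key)
            self.vals.insert(i, val)

    def get(self, key):
        i = bisect.bisect_left(self.keys, key)     # O(log n) search
        if i < len(self.keys) and self.keys[i] == key:
            return self.vals[i]
        return None

    def delete(self, key):
        self.put(key, None)   # lazy delete: leave a tombstone, no rebalance

leaf = ToyLeaf()
for k, v in [(3, "c"), (1, "a"), (2, "b")]:
    leaf.put(k, v)
leaf.delete(2)
```

A real B+ tree splits full leaves and links them for range scans; the tombstone trick is the same one the post uses to avoid cascading tree locks on delete.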
Day 10/30 — Social Network Analyzer (Python + MySQL)

🔹 Project Overview:
Developed a Social Network Analyzer system using Python and MySQL to model user relationships, analyze connections, and recommend new links using graph-based algorithms.

🔹 Tools Used:
Python | MySQL | Data Structures | Graph Algorithms | NetworkX | Matplotlib

🔹 Key Features:
• Designed a relational database to manage users and connections
• Built a graph structure to represent real-world relationships
• Implemented BFS to find shortest connection paths
• Identified mutual connections between users
• Developed a recommendation engine based on shared connections
• Added network visualization for interactive analysis
• Created a CLI-based interface with clean and colored output

🔹 What I Learned:
• Applying graph algorithms in real-world scenarios
• Working with MySQL for structured data management
• Building scalable backend logic in Python
• Visualizing relationships using network graphs
• Designing modular and maintainable code

🔗 GitHub Repository: https://lnkd.in/dpSCzhQG

Would appreciate your feedback and suggestions 🙌

#30DaysOfCoding #PythonProjects #SQL #DataStructures #BackendDevelopment #LearningByDoing
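The shortest-connection-path feature is the heart of such an analyzer. A self-contained BFS sketch over an adjacency dict (the friends graph is invented; the repo's implementation may differ):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """BFS over an adjacency dict; returns the shortest path as a list."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for friend in graph.get(path[-1], []):
            if friend not in seen:
                seen.add(friend)          # mark on enqueue, not dequeue
                queue.append(path + [friend])
    return None                           # no connection between the two

friends = {
    "ana":  ["ben", "cara"],
    "ben":  ["dev"],
    "cara": ["dev"],
    "dev":  ["eli"],
}
path = shortest_path(friends, "ana", "eli")
print(path)  # ['ana', 'ben', 'dev', 'eli']
```

Because BFS explores level by level, the first path to reach the goal is guaranteed to be a shortest one, which is why it fits "degrees of separation" queries.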
SQLAlchemy 2.0 made querying simpler — but also more explicit.

When working with async SQLAlchemy in FastAPI, one important thing to understand is:
👉 Query building is synchronous
👉 Execution is asynchronous

Most confusion comes from mixing these two.

In this post, I've focused on the core query-building patterns you'll use daily:
✔️ select() — building the base query
✔️ where() — filtering data
✔️ join() — working with relationships

And then executing them using:
👉 await session.execute()

No unnecessary theory — just practical patterns that map closely to SQL.

📌 This is Part 1 of a series:
Part 2 → Execution layer
Part 3 → Insert, update, delete + transactions

If you're using FastAPI with PostgreSQL, this will make your ORM usage much clearer.

💬 Do you prefer an explicit query style like this, or more ORM abstraction like in the previous version?

#sqlalchemy #fastapi #postgresql #python #backenddevelopment
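The synchronous-build / asynchronous-execute split means a statement can be constructed and inspected with no session at all. A sketch using SQLAlchemy Core's lightweight table()/column() helpers with a hypothetical users/orders schema; in the async app you would then hand stmt to await session.execute(stmt):

```python
from sqlalchemy import column, select, table

# Hypothetical schema for illustration.
users = table("users", column("id"), column("name"))
orders = table("orders", column("id"), column("user_id"))

# Building is plain synchronous Python: no engine, no session, no await.
stmt = (
    select(users.c.name, orders.c.id)
    .select_from(users.join(orders, users.c.id == orders.c.user_id))
    .where(users.c.name == "alice")
)

# Render the SQL that the (async) execution layer would send.
sql = str(stmt)
```

Only the execute call touches the network, which is why it is the only awaited step.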
I'm often asked how to handle edge cases when building data layers with MongoDB and Python. Simple CRUD is great, but real-world apps need robust query patterns and clean architecture.

Working in VS Code on this project, I focused on layering logic. Instead of calling the database directly from the application layer, I used a modular service pattern (like user_service.py calling db_utils.py).

A few key practices I implemented:
✅ Robust error handling: returning cleanly for cases like invalid ObjectIds, which prevents app crashes.
✅ Modular query logic: abstracting queries into specific, reusable functions (e.g., get_users_by_college) makes the main logic much easier to read and test.
✅ Automated, Postman-free testing: in my terminal, you can see I'm using curl and echo to script a "Full CRUD Test Cycle." This is a fast, reproducible way to verify APIs during development.

What's your go-to pattern for structuring database interactions in your applications? Do you stick with raw queries, ORMs, or custom data access objects? Let me know in the comments!

GitHub link -> https://lnkd.in/dASzkj7T

#mongodb #python #development #dataservices #vscode #backend #programming #softwareengineering
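The layering described above can be sketched without a live MongoDB. The function names mirror the post's user_service.py / db_utils.py split but are otherwise invented, and the validation rule (ObjectIds are 24 hex characters) is hand-rolled here to keep the sketch dependency-free; real code would call bson.ObjectId.is_valid and query an actual collection.

```python
import string

# --- db_utils.py layer: talks to storage (a dict standing in for Mongo) ---
FAKE_DB = {
    "64b64c3f2f9b256dd8f0a111": {"name": "Asha", "college": "IIT"},
}

def is_valid_object_id(oid: str) -> bool:
    """Hand-rolled: 24 hex chars (use bson.ObjectId.is_valid in real code)."""
    return len(oid) == 24 and all(c in string.hexdigits for c in oid)

def find_user(oid: str):
    return FAKE_DB.get(oid)

# --- user_service.py layer: validates first, then delegates ---
def get_user(oid: str):
    """Clean error returns instead of crashes on malformed ids."""
    if not is_valid_object_id(oid):
        return {"error": "invalid id"}
    user = find_user(oid)
    return user if user else {"error": "not found"}
```

The application layer only ever sees get_user's clean results, so a garbage id from a client becomes a normal error response rather than an unhandled exception.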