Day 21/50: The Connection Pool That Slowly Died

The Setup:-
Application running smoothly. After 48 hours: "No more connections available in pool."

The Problem:-
Leaked connections. Database connections were never closed, accumulating until the pool was exhausted.

The Investigation:-
-> Enabled connection pool logging.
-> Found database queries that never closed their connections:

---- python
# The culprit
cursor = db.cursor()
result = cursor.execute("SELECT * FROM users")
return result  # No cleanup!
----

-> After 48 hours of traffic, all 20 pool connections were open and abandoned.

The Solution:-
-> Use context managers for automatic cleanup:

---- python
with db.connection.cursor() as cursor:
    result = cursor.execute("SELECT * FROM users")
    return result  # Cursor auto-closes on exit
----

Or explicitly close in a finally block:

---- python
cursor = db.cursor()  # acquire before try, so cursor always exists in finally
try:
    result = cursor.execute("SELECT * FROM users")
finally:
    cursor.close()  # Always runs
----

The Lesson:-
-> Connection leaks are silent killers. Always use context managers or try/finally blocks.
-> Monitor pool metrics regularly—don't wait for crashes.

"Ever lost a pool to leaks? Tell your story."

#Day21 #50DaysOfDebugging #Python #Database #ConnectionPooling #SoftwareEngineering #Debugging #BestPractices
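The cleanup pattern above can be tried end-to-end with the stdlib sqlite3 driver: a minimal sketch, assuming nothing about the post's actual database library. Note that sqlite3 cursors are not context managers themselves, so `contextlib.closing` supplies the same close-on-exit guarantee.

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# closing() guarantees cursor.close() runs even if the query raises,
# the same guarantee the try/finally version gives by hand.
with closing(conn.cursor()) as cursor:
    rows = cursor.execute("SELECT * FROM users").fetchall()

print(rows)  # [(1, 'alice')]
```

The same `closing()` wrapper works for any object with a `close()` method, which is why it is a handy default when a driver's cursors lack `with` support.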
Saiteja Singirikonda’s Post
More Relevant Posts
Day 22/50: The Pagination That Tanked at Scale

The Setup:-
Report works fine with 10,000 rows. With 1M rows, the query times out.

The Problem:-
Deep pagination with LIMIT/OFFSET. The query had to scan and discard 999,990 rows just to reach page 100,000.

The Investigation:-
The query looked innocent:

---- sql
-- Page 100,000 with 10 items per page
SELECT * FROM orders ORDER BY id LIMIT 10 OFFSET 999990;
----

The database scanned 999,990 rows unnecessarily before returning 10.

The Solution:-
Switched to keyset pagination (cursor-based):

---- python
# Instead of an offset, track the last seen ID
last_id = request.GET.get('last_id')
results = Order.objects.filter(id__gt=last_id).order_by('id')[:10]
----

Time dropped from 12 seconds to 0.2 seconds.

The Lesson:-
LIMIT/OFFSET doesn't scale. Use cursor-based pagination with indexed lookups for large datasets.

"Have you been burned by deep pagination?"

#Day22 #50DaysOfDebugging #Database #Pagination #Performance #Scalability #SoftwareEngineering #Django #SQL
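The keyset idea can be demonstrated outside Django with plain SQL on sqlite3: a self-contained sketch (table and function names are illustrative, not the post's code). The WHERE clause seeks directly through the primary-key index instead of scanning and discarding OFFSET rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1, 101)])

def page_after(last_id, page_size=10):
    # Keyset pagination: jump straight to id > last_id via the index,
    # so cost depends on page_size, not on how deep the page is.
    rows = conn.execute(
        "SELECT id FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()
    return [r[0] for r in rows]

print(page_after(0))   # [1, 2, ..., 10]
print(page_after(50))  # [51, 52, ..., 60]
```

The client passes back the last id it saw, so each page costs the same regardless of depth; the trade-off is that you can only step forward (or backward with a reversed index), not jump to an arbitrary page number.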
Ever wondered what __init__.py actually does?

While setting up my RAG project, my folder looked like this:

----
rag-retriever/
├── src/
│   ├── __init__.py
│   ├── data_loader.py
│   ├── chunk_splitter.py
│   └── semantic_split.py
└── main.py
----

At first, I kept getting import errors — until I understood the real role of __init__.py.

It's simple yet powerful:
1) It tells Python that "this folder is a package."
2) It lets you import cleanly from one file to another.

So now I can:
- Inside src/: use from .semantic_split import function_name
- Outside (in main.py): use from src import chunk_splitter

That tiny file makes the entire structure modular and production-ready 🚀

In short: __init__.py is your folder's identity card — it transforms random scripts into a real Python package 📦

#Python #LLM #RAG #MLOps #SoftwareEngineering #DataScience #CodingTips #LearningJourney
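This is easy to verify in isolation: the sketch below builds a throwaway package on disk (with hypothetical module and function names echoing the post's layout) and shows that an empty __init__.py is all it takes to make the folder importable.

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package: root/src/{__init__.py, chunk_splitter.py}
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src"))

# An empty __init__.py is enough to mark the folder as a package
with open(os.path.join(root, "src", "__init__.py"), "w") as f:
    f.write("")

with open(os.path.join(root, "src", "chunk_splitter.py"), "w") as f:
    f.write("def split(text):\n    return text.split()\n")

# Make the root importable, then import exactly as main.py would
sys.path.insert(0, root)
chunk_splitter = importlib.import_module("src.chunk_splitter")
print(chunk_splitter.split("hello world"))  # ['hello', 'world']
```

Delete the __init__.py and the same import fails under classic package semantics, which is exactly the error the post describes.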
🥔 Hash Browns vs. 💻 Hash Tables

Ever wondered if your breakfast has anything in common with high-performance code? It's not a coincidence.

The word "hash" comes from the French hacher, which means "to chop up."
🥔 Hash browns: you "chop up" potatoes.
#️⃣ Hash function: it "chops up" data into a unique ID (a hash code).

This simple idea is the secret behind one of the most powerful data structures we use every day: the hash table. (You probably know it as dict in Python or HashMap in Java.)

Here's the "kitchen" analogy:
1. A hash table is like a big cabinet with numbered drawers (buckets).
2. When you want to store an item (e.g., a user profile), you use a hash function (a "magic chopper") on the key (e.g., user_id).
3. The function instantly tells you which drawer to use (e.g., "Drawer #137").
4. When you need that user again, you just hash the user_id, go straight to Drawer #137, and retrieve it.

Instead of searching the whole cabinet (a slow O(n) operation), you get instant access. That's the magic of O(1) — constant-time lookup.

It's the engine that makes databases, caches, and modern programming languages incredibly fast. All thanks to an idea we borrowed from the kitchen. 🧑‍🍳

What's your favorite real-world analogy for a complex computer science concept?

#DataStructures #Algorithms #Programming #Python #SoftwareEngineering #ComputerScience #TechExplained #Coding
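The kitchen analogy maps onto code almost line for line. A toy sketch (class and method names invented for illustration; real dict/HashMap implementations are far more optimized):

```python
class KitchenCabinet:
    """Toy hash table: numbered drawers (buckets) plus a 'chopper' (hash)."""

    def __init__(self, num_drawers=8):
        self.drawers = [[] for _ in range(num_drawers)]

    def _drawer(self, key):
        # Python's built-in hash() is the "magic chopper":
        # it turns any hashable key into a drawer number.
        return hash(key) % len(self.drawers)

    def put(self, key, value):
        drawer = self.drawers[self._drawer(key)]
        for i, (k, _) in enumerate(drawer):
            if k == key:              # key already stored: overwrite
                drawer[i] = (key, value)
                return
        drawer.append((key, value))   # two keys may share a drawer (collision)

    def get(self, key):
        # Go straight to the right drawer instead of scanning the cabinet
        for k, v in self.drawers[self._drawer(key)]:
            if k == key:
                return v
        raise KeyError(key)

cabinet = KitchenCabinet()
cabinet.put("user_137", {"name": "Ada"})
print(cabinet.get("user_137"))  # {'name': 'Ada'}
```

The per-drawer lists are how real hash tables survive collisions: lookups stay O(1) on average as long as the drawers stay short, which is why production implementations resize when they fill up.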
🚀 DSA Progress – Day 104

✅ Problem #49: Group Anagrams
🧠 Difficulty: Medium | Topics: Hash Table, String, Sorting, Frequency Count

🔍 Approach: Implemented a hashing + frequency count method to group all words that are anagrams of each other.

Step 1 (Character Frequency): For every string in the input list, create a fixed-size array of 26 integers representing the frequency of each lowercase letter (a–z).

Step 2 (Canonical Representation): Convert this frequency array into a canonical string that acts as a unique key for all words sharing the same letter composition. Example: "eat", "tea", and "ate" all map to "aet".

Step 3 (Hash Map Grouping): Use a dictionary (defaultdict(list)) where:
- Key → the canonical string (like "aet")
- Value → list of words that match this signature

Step 4 (Result Construction): Collect all the dictionary values and return them as the grouped anagrams.

🕒 Time Complexity: O(n × k), where n = number of words and k = average length of each word (to count characters).
💾 Space Complexity: O(n × k), for storing all words and their frequency signatures.

📁 File: https://lnkd.in/g2MkYXJg
📚 Repo: https://lnkd.in/g8Cn-EwH

💡 Learned: This problem reinforced the power of hashing and canonical representation for grouping similar items. Instead of sorting each string (which costs O(k log k)), using a frequency count made the grouping faster and cleaner. It deepened my understanding of how to map variable-length strings into fixed-size signatures — a common trick in string hashing problems.

✅ Day 104 complete — grouped scattered words into perfect anagram families using hash magic! 🔤✨📚

#LeetCode #DSA #Python #HashMap #StringManipulation #Anagrams #FrequencyCount #100DaysOfCode #DailyCoding #InterviewPrep #GitHubJourney
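The four steps can be sketched in a few lines (this is my reconstruction of the approach, not the linked solution file; it uses a tuple of the frequency array as the key, which is equivalent to the canonical string the post describes):

```python
from collections import defaultdict

def group_anagrams(words):
    groups = defaultdict(list)
    for word in words:
        # Steps 1-2: fixed-size frequency array -> hashable canonical key
        freq = [0] * 26
        for ch in word:
            freq[ord(ch) - ord('a')] += 1
        groups[tuple(freq)].append(word)   # Step 3: hash-map grouping
    return list(groups.values())           # Step 4: collect the groups

print(group_anagrams(["eat", "tea", "tan", "ate", "nat", "bat"]))
# [['eat', 'tea', 'ate'], ['tan', 'nat'], ['bat']]
```

Each word is scanned once to build its 26-slot signature, giving the O(n × k) time the post quotes, versus O(n × k log k) for the sort-each-string variant.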
Speed up your Frappe queries with frappe.get_cached_value()

If you're repeatedly fetching the same field value from the database, don't use frappe.db.get_value() every time — it hits the database on each call. Instead, use frappe.get_cached_value(). It stores the result in memory (cache) and returns it faster on the next request.

When to use it:
- Fetching non-changing fields like settings, configs, defaults
- You want better performance with fewer DB calls
- Accessing single fields from large DocTypes

When NOT to use it:
- When the data changes frequently
- When you need fresh DB values every time
- When fetching multiple rows — use frappe.get_all() or frappe.get_list() instead

Use caching smartly — small optimizations add up in big Frappe apps 💪

#Frappe #ERPNext #Backend #Performance #OpenSource #Python

Rushabh Mehta, Hussain Nagaria, Ejaaz Khan, Aditya Hase, Sherin K R, Ritvik Sardana, Manish Dipankar, Frappe, efeone
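The pattern behind this kind of cached lookup is a simple read-through cache. A generic sketch follows: everything here (the fake table, the stand-in db_get_value, the counter) is illustrative and is not Frappe code; it only demonstrates why the second call is cheap.

```python
db_calls = 0

def db_get_value(doctype, name, field):
    """Stand-in for a real database read (hypothetical data)."""
    global db_calls
    db_calls += 1
    fake_db = {("System Settings", "System Settings", "time_zone"): "UTC"}
    return fake_db[(doctype, name, field)]

_cache = {}

def get_cached_value(doctype, name, field):
    key = (doctype, name, field)
    if key not in _cache:          # cache miss: hit the "database" once
        _cache[key] = db_get_value(*key)
    return _cache[key]             # cache hit: served from memory

tz = get_cached_value("System Settings", "System Settings", "time_zone")
tz = get_cached_value("System Settings", "System Settings", "time_zone")
print(tz, db_calls)  # UTC 1  -> the second call never touched the database
```

This sketch also makes the post's warning concrete: nothing here ever invalidates _cache, so if the underlying value changes, the cached copy goes stale, which is exactly why frequently changing fields should be read fresh instead.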
The latest English Indices of Deprivation data was released last week! Link here -> https://lnkd.in/eud9sCHj Like with a lot of official statistics, the data is often scattered across multiple files and sheets. I know that a lot of data analysts, and data folk in general, will be busy wrangling away and starting to analyse the data. So... I wrote a super simple Python package to help speed up the process. Install the package. Run "iod load" in your terminal. Load the latest data into DuckDB in a minute or two and away you go! Link to the Python package is in the comments :)
🧩 Day 36 — Combination Sum (LeetCode 39)

📝 Problem
- Given an array of distinct integers candidates and a target integer target, return a list of all unique combinations of candidates where the chosen numbers sum to target.
- The same number may be chosen from candidates an unlimited number of times.
- Two combinations are unique if the frequency of at least one number is different.

🔁 Approach
- Use backtracking (DFS) to explore all possible combinations.
- At each step, choose a number and subtract it from the remaining target.
- If the remaining target becomes 0, store the current combination.
- If it becomes negative, backtrack (stop exploring that path).
- To avoid duplicates, always start the next recursive call from the current index or later. This ensures combinations like [2,3] and [3,2] are treated as the same.

📊 Complexity
- Time Complexity: O(2ⁿ)
- Space Complexity: O(n)

🔑 Concepts Practiced
- Backtracking and recursion
- Depth-first search (DFS)
- Combination generation with repetition allowed
- Control of duplicate combinations using start index

#Leetcode #DSA #Python #DFS #Search
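The approach above translates directly into a short backtracking function (a sketch of the standard technique, not the author's submitted solution):

```python
def combination_sum(candidates, target):
    results = []

    def backtrack(start, remaining, path):
        if remaining == 0:                   # exact sum reached: record a copy
            results.append(path[:])
            return
        for i in range(start, len(candidates)):
            if candidates[i] > remaining:    # would go negative: prune this path
                continue
            path.append(candidates[i])
            # Reuse is allowed, so recurse from i (not i + 1);
            # never looking left of i is what keeps [2,3] and [3,2]
            # from both appearing.
            backtrack(i, remaining - candidates[i], path)
            path.pop()                       # undo the choice and try the next

    backtrack(0, target, [])
    return results

print(combination_sum([2, 3, 6, 7], 7))  # [[2, 2, 3], [7]]
```

The start index is the whole duplicate-control mechanism: each recursion level may repeat its own number or move right, but can never revisit an earlier candidate.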
At PyBay25, Guido van Rossum (yes, that Guido) demoed a new Python package for what he calls Structured RAG. Instead of the usual “throw everything into a vector store and hope for the best,” this approach uses an LLM during ingestion to extract structured data (entities, topics, verbs...) and store them in a standard database. When querying, it structures the prompt the same way. I’ve always believed that RAGs need this kind of preprocessing. It’s a bit naïve to think you can plug in every filetype under the sun, skip curation, and expect magic. This feels like a more unified push, and coming from a legend like Guido, that matters. It’s not perfect (nothing in RAGs ever is), but it sounds a lot more efficient once everything’s properly indexed. And seriously, has anyone out there achieved a perfect RAG in production yet? Asking for… all of us. 😅 https://lnkd.in/dg2X4Bsa