How the asyncio event loop actually works under the hood

Async in Python often feels like magic. You write:

await some_io()

…and suddenly your app handles thousands of requests. But what's really happening?

At the core: the event loop
Think of it as a scheduler that constantly:
• checks for ready tasks
• runs them
• pauses on await
• switches to another task

Key idea
Async functions don't run in parallel. They run cooperatively.

Example:

async def task():
    await something()

When Python hits await, it:
• pauses the coroutine
• returns control to the loop
• runs another task

Threads vs asyncio
Threads:
Thread A → running
Thread B → waiting
Asyncio:
Task A → waiting for I/O
Task B → running
Task C → ready

What happens under the hood
Very simplified loop:

while True:
    run ready tasks
    wait for I/O

The real magic: I/O
When you do:

await socket.recv()

Python:
• registers the socket with the OS (epoll / kqueue)
• pauses the coroutine
• resumes it when data is ready
No blocking. No busy waiting.

Why it's powerful
• thousands of connections in one thread
• low memory usage
• no thread-switching overhead

Important limitation
CPU-bound code blocks everything.

for i in range(10_000_000):
    pass

→ the event loop freezes

Rule of thumb
I/O-bound → asyncio
CPU-bound → multiprocessing

Mental model
Tasks → Event Loop → OS → Event Loop → Tasks

Question
Do you use asyncio in production, or do you still prefer threads?

#Python #AsyncIO #BackendDevelopment #SoftwareEngineering
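The "very simplified loop" above can be sketched with plain generators. This is a toy round-robin scheduler to illustrate cooperative switching, not asyncio's real implementation (which also waits on I/O via selectors):

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over coroutine-like generators: run each task until it
    yields (our stand-in for `await`), then switch to the next ready task."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run until the task "awaits"
            ready.append(task)  # still alive: requeue it
        except StopIteration:
            pass                # task finished

def worker(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield  # cooperative pause: hand control back to the loop

scheduler([worker("A", 2), worker("B", 2)])
# A step 0 / B step 0 / A step 1 / B step 1
```

Note how the interleaving happens only at `yield` points, exactly as async tasks only switch at `await`.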
Evgenii Klimenko’s Post
UNLEASHED THE PYTHON! 1.5, 2, & 3!!! Nice and easy with a Python API wrapper for rapid integration into any pipeline, then a good old-fashioned swift kick from the header-only C++ core for speed. STRIKE WITH AIM FIRST; THEN SPEED!! NO MERCY!!!

Part 8 of 14. Packaging the library for distribution & refining the 4.862 constant to ensure it's rock-solid for users.

1. Refining the "4.862" constant
Based on my calculation (309,390 / 63,632 = 4.86217…), the library should use high-precision floating point. This ensures that when the library scales, the "drift" doesn't break the encryption or the data sync. With help from AI, I will hard-code this as a high-precision constant in the engine.

2. The library structure (GitHub-ready)
To make this easy for others to download & use, we will follow the standard structure for a high-performance Python/C++ hybrid library.

Project name: libcyclic41

libcyclic41/
├── src/
│   └── engine.hpp        # The high-speed C++ core
├── cyclic41/
│   ├── __init__.py       # Python entry point
│   └── wrapper.py        # Ease-of-use API
├── tests/
│   └── test_cycles.py    # Stress test for the 1,681 limit
├── setup.py              # Installation script (pip install .)
└── README.md             # Documentation for "others"

3. The installation script (setup.py)
This is what makes it "easy" for others: they can install the mathematical engine with a single command.
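The "refining the constant" step can be sketched with Python's standard decimal module, assuming the constant really is derived from the post's ratio (the name CYCLIC_CONSTANT is my invention, not part of the library):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits, well beyond float64 drift

# Hypothetical derivation of the engine constant from the post's ratio.
CYCLIC_CONSTANT = Decimal(309_390) / Decimal(63_632)

print(CYCLIC_CONSTANT)  # 4.86217...
```

Hard-coding the resulting high-precision string literal (rather than the float 4.862) is what keeps the "drift" out of downstream encryption and sync logic.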
Python isn't about being clever; it's about being concise. 👉 Here are 10 one-liners that actually save time in production. 1. Flatten a Nested List: [item for sublist in nested for item in sublist] – A list comprehension that turns a 2D list into a flat 1D list. 2. Swap Variables: a, b = b, a – Pythonic variable swapping using tuple unpacking (no temp variable needed). 3. Read File into Lines: open("f.txt").read().splitlines() – Efficiently reads a file and removes trailing newline characters. 4. Count Frequencies: from collections import Counter; Counter(data) – Quickly generates a dictionary of element counts. 5. Reverse Anything: value[::-1] – Uses slicing to reverse strings, lists, or tuples in one go. 6. Ternary Operator: x = "Yes" if condition else "No" – Compact inline conditional assignments. 7. Chained Comparisons: if 0 < x < 10: – Readable range checks that mirror mathematical notation. 8. List to String: ", ".join(map(str, values)) – Joins a list of items (even non-strings) into a single formatted string. 9. Pretty Print: from pprint import pprint; pprint(data) – Formats complex dictionaries or JSON into a readable structure. 10. Easter Eggs: import antigravity – A fun hidden feature that opens a classic XKCD comic about Python. #Python #CodingTips #DataEngineering #SoftwareEngineering #DataEngineer
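Several of these one-liners can be verified in a quick session (the sample data is mine):

```python
from collections import Counter

# 1. Flatten a nested list
nested = [[1, 2], [3, 4]]
assert [item for sublist in nested for item in sublist] == [1, 2, 3, 4]

# 2. Swap variables with tuple unpacking
a, b = 1, 2
a, b = b, a
assert (a, b) == (2, 1)

# 4. Count frequencies
assert Counter("banana") == {"b": 1, "a": 3, "n": 2}

# 5. Reverse anything sliceable
assert "hello"[::-1] == "olleh"

# 7. Chained comparisons
x = 5
assert 0 < x < 10

# 8. Join non-strings into one string
assert ", ".join(map(str, [1, 2, 3])) == "1, 2, 3"

print("all one-liners check out")
```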
I used to think prompting was just "writing instructions." Now it seems much closer to interface design.

Think about APIs:
- You define inputs clearly
- You enforce schemas so outputs are structured
- You handle edge cases and document expected behavior
- And then you test everything

Prompts should be no different. Here is what I do now:
- Use structured outputs; Pydantic in Python makes this easy
- Make the model return valid JSON every single time
- Define exactly what happens:
  - when the model is uncertain
  - when the input is invalid
  - when the task can't be completed

A few things that changed my approach:
If a prompt breaks when the input changes slightly → it's not production-ready.
If it only works on the examples I tested → it's fragile.

I believe you should treat prompts like code:
- Store them in files
- Version-control them
- Write tests for them

The biggest insight I want to share? A prompt alone does not solve anything. The system around it does.

When you design prompts this way, results stop being random and start being predictable and reliable, even in messy, real-world situations.

That was my take. If you've found anything else insightful, please share <3
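The "valid JSON every single time, with defined failure modes" idea can be sketched with the standard library alone (Pydantic, as the post suggests, does this more robustly with BaseModel validation; the schema and function names here are my own illustration):

```python
import json

# Hypothetical output contract for a model response.
REQUIRED = {"answer": str, "confidence": float}

def parse_model_output(raw: str) -> dict:
    """Return validated data, or a structured error instead of crashing."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"ok": False, "error": "invalid_json"}      # defined failure mode
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            return {"ok": False, "error": f"bad_field:{key}"}
    return {"ok": True, "data": data}

print(parse_model_output('{"answer": "42", "confidence": 0.9}'))
print(parse_model_output("not json at all"))
```

Because every code path returns the same shape, the system around the prompt can branch on `ok` instead of catching surprise exceptions.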
Let's demystify Python for backend in 60 seconds—with code that matters. ❌ Myth: "Python is too slow for production." ✅ Truth: Python is the glue that holds scalable systems together. Here's the pattern I use for every backend feature: # 1. The Universal Data Pattern: List of Dictionaries users = [ {"id": 1, "name": "Alice", "email": "a@x.com", "active": True}, {"id": 2, "name": "Bob", "email": "b@x.com", "active": False}, ] # 2. Filter + Transform with for + if (backend's heartbeat) active_users = [ user["email"] for user in users if user["active"] ] Why this pattern scales: ✅ Readable by humans (onboarding, debugging, audits) ✅ Testable in isolation (unit tests, CI/CD) ✅ Extendable without breaking (open/closed principle) ✅ Graceful under failure (error handling, logging) Why this matters for AI engineering: Model endpoints = functions with clear contracts Feature pipelines = list-of-dicts transformations Evaluation systems = filter + aggregate patterns MLOps = Python + infrastructure + observability Master the pattern. Scale the impact. 🔧 What's your go-to pattern for processing backend data? List comprehensions? Pandas? Something custom? 👇 #Python #BackendDevelopment #SoftwareEngineering #AIInfrastructure #CleanCode
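One way the pattern stays "graceful under failure" (my extension, not from the post) is to tolerate malformed rows with `.get()` defaults instead of letting a missing key raise:

```python
users = [
    {"id": 1, "name": "Alice", "email": "a@x.com", "active": True},
    {"id": 2, "name": "Bob", "email": "b@x.com", "active": False},
    {"id": 3, "name": "Carol"},  # malformed row: no email / active flag
]

# .get() with a default keeps the comprehension from raising KeyError
# on rows that are missing fields; bad rows are simply filtered out.
active_emails = [u.get("email") for u in users if u.get("active", False)]
print(active_emails)  # ['a@x.com']
```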
Day 63 of LeetCode Grind ⚡🔥 Two Easy problems today, but these are the exact problems that separate engineers who know their data structures from those who don't. Every array/string problem in interviews traces back to one of these two patterns.

1️⃣ Contains Duplicate (217)
Pattern: HashSet for O(1) membership check

return len(nums) != len(set(nums))

* A set discards duplicates. If the set is smaller than the original array, a duplicate exists. One line. O(n) time, O(n) space.
* The deeper lesson: whenever you're asking "have I seen this before?", reach for a set.

2️⃣ Valid Anagram (242)
Pattern: Frequency Counter

from collections import Counter
return Counter(s) == Counter(t)

* Two strings are anagrams ↔ identical character frequencies.
* Python's Counter makes this one line, but the real skill is knowing why it works: we're hashing character → count and comparing maps.
* Follow-up: works for Unicode too. Python dicts handle any hashable key, not just ASCII.

✨ Reflection: These two problems teach the two most reused patterns in all of array/string DSA:
* Set → "Does this exist?"
* HashMap/Counter → "How many times does this exist?"
Master these two and you've unlocked the mental model behind 40% of Easy/Medium problems.

#LeetCode #Day63 #Python #HashSet #HashMap #DataStructures #ProblemSolving #100DaysOfCode #BackToBasics
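Both one-liners wrapped as complete, testable functions (using LeetCode's standard examples):

```python
from collections import Counter

def contains_duplicate(nums: list[int]) -> bool:
    # A set discards duplicates; a size mismatch means one existed.
    return len(nums) != len(set(nums))

def is_anagram(s: str, t: str) -> bool:
    # Anagrams have identical character-frequency maps.
    return Counter(s) == Counter(t)

assert contains_duplicate([1, 2, 3, 1]) is True
assert contains_duplicate([1, 2, 3, 4]) is False
assert is_anagram("anagram", "nagaram") is True
assert is_anagram("rat", "car") is False
print("217 and 242 pass")
```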
Most Python objects store their attributes inside a per-instance dictionary (__dict__). That’s what makes Python so flexible. You can add attributes dynamically, inspect objects at runtime, and modify behavior easily. But that flexibility has a cost. Each instance carries: • a dictionary object • hash table overhead • extra pointers and allocations At small scale, it doesn’t matter. At millions of objects, it does. That’s where __slots__ comes in. class User: __slots__ = ["name", "age"] With __slots__, Python removes the default __dict__ and stores attributes in a fixed internal layout. That means: • lower memory usage per object • faster attribute access (no dict lookup) • predictable structure But there’s a trade-off: You lose flexibility. No dynamic attributes: u.location = "Brazil" # raises error And some edge cases matter: • inheritance becomes trickier • no __dict__ unless explicitly added • need __weakref__ if using weak references So __slots__ isn’t a general optimization. It’s a scaling tool. Best used when you have: • many instances • fixed attribute schema • memory-sensitive workloads Python is just AMAZING
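The trade-offs above are easy to see side by side (the class names are mine, for illustration):

```python
class PlainUser:
    def __init__(self, name, age):
        self.name = name
        self.age = age

class SlottedUser:
    __slots__ = ("name", "age")  # fixed layout, no per-instance __dict__
    def __init__(self, name, age):
        self.name = name
        self.age = age

u = SlottedUser("Ana", 30)

# Dynamic attributes are rejected on slotted classes:
try:
    u.location = "Brazil"
except AttributeError as e:
    print("blocked:", e)

# The slotted instance has no per-instance __dict__ to pay for:
print(hasattr(PlainUser("Ana", 30), "__dict__"))  # True
print(hasattr(u, "__dict__"))                     # False
```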
I tried a new Python library called Scrapling recently… and honestly, it surprised me. Most scrapers I’ve built break after some time. Just a small change in HTML… and everything stops working. So you end up fixing selectors again and again. Scrapling feels different. It tries to adapt when the website structure changes, so you don’t have to constantly fix things manually. It also handles dynamic pages and has some basic stealth features, which is useful when working on real-world projects. Not saying it’s perfect, but it definitely reduces a lot of pain in scraping workflows. For anyone working with Python, data, or automation — worth checking out. Docs: https://lnkd.in/d_J2MbAK GitHub: https://lnkd.in/dJvYzzMT Curious if anyone else has tried it? Follow Saif Modan #Python #WebScraping #AI #DataEngineering #Tech
Python Concept: Shallow Copy a = [[10,20],[30,40],[50,60]] b = a.copy() print(id(a)) # Address 1000 print(id(b)) # Address 2000 a[0][0] = 200 a[0][1] = 100 print(a) # [[200,100],[30,40],[50,60]] print(b) # [[200,100],[30,40],[50,60]] Even though a and b have different memory addresses, changes in nested elements affect both. This happens because copy() makes a shallow copy, meaning the inner objects are still shared. Be careful when working with nested lists.
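When you need the inner lists to be independent too, the fix is copy.deepcopy, which recursively copies nested objects:

```python
import copy

a = [[10, 20], [30, 40], [50, 60]]
b = copy.deepcopy(a)  # copies the inner lists as well, not just the outer one

a[0][0] = 200
print(a)  # [[200, 20], [30, 40], [50, 60]]
print(b)  # [[10, 20], [30, 40], [50, 60]]  <- unaffected
```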
Our Python service had a memory leak… but gc.collect() said everything was fine. Our Python document parsing service (PDF → OCR → Gemini APIs) started crashing with OOMs. Memory kept increasing after every document 📈 Eventually → OOM crashes Look at the image 👇 Top = before (slow memory growth) Bottom = after (stable) The tricky part? No obvious leak. gc.collect() was already there. Profilers showed nothing. What was actually happening: • Creating a new genai.Client() per request → sockets & connection pools never released • C-libraries (PyMuPDF, PIL, OpenCV) using malloc() → glibc holds memory, doesn’t return it to OS • Cleanup missing in exception paths → leaked temp files & buffers • Large objects staying in memory too long Fixes: ✔ Reused a single client ✔ Added: ctypes.CDLL("libc.so.6").malloc_trim(0) ✔ Moved cleanup to finally ✔ Explicitly closed & deleted large objects 💡 Takeaway In Python systems using C extensions: ➡️ gc.collect() is NOT enough ➡️ Memory leaks can live outside Python ➡️ Understanding the OS allocator matters Same system. Same workload. Completely different memory behavior. #backend #python #debugging #engineering
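A sketch of the fixes combined (process_document and its body are hypothetical stand-ins for the real PDF pipeline; malloc_trim is glibc/Linux-only, hence the guard):

```python
import ctypes
import ctypes.util
import os
import tempfile

# Fix 1: one long-lived client at module scope instead of per-request, e.g.:
# CLIENT = genai.Client()  # created once, reused by every request

def trim_heap() -> None:
    """Ask glibc to return freed arena memory to the OS (no-op elsewhere)."""
    try:
        ctypes.CDLL(ctypes.util.find_library("c")).malloc_trim(0)
    except (OSError, AttributeError, TypeError):
        pass  # non-glibc platform: nothing to trim

def process_document(data: bytes) -> int:
    # Fix 2: cleanup lives in finally, so exception paths release
    # temp files and buffers too.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, data)
        return len(data)  # stand-in for the real PDF -> OCR -> LLM work
    finally:
        os.close(fd)
        os.unlink(path)  # Fix 3: drop temp files / large objects promptly
        trim_heap()      # Fix 4: hand glibc-held pages back to the OS

print(process_document(b"hello"))  # 5
```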
🚀 Day 14/60 – Dictionary Comprehension (Level Up Your Python 🚀) Yesterday you learned list comprehension. Today, let’s level up 👇 🧠 What is Dictionary Comprehension? A quick way to create dictionaries in one clean line. ❌ Traditional Way numbers = [1, 2, 3, 4] squares = {} for num in numbers: squares[num] = num * num print(squares) ✅ Dictionary Comprehension Way numbers = [1, 2, 3, 4] squares = {num: num * num for num in numbers} print(squares) 👉 Cleaner. Faster. More Pythonic. 🔍 With Condition numbers = [1, 2, 3, 4, 5, 6] even_squares = {num: num * num for num in numbers if num % 2 == 0} print(even_squares) ⚡ Real Example names = ["adeel", "ali", "ahmed"] name_length = {name: len(name) for name in names} print(name_length) ❌ Common Mistake {num * num for num in numbers} # ❌ This creates a set Correct: {num: num * num for num in numbers} # ✅ Dictionary 🔥 Pro Tip Use dictionary comprehension when: ✅ You want clean transformation of data ❌ Avoid if logic becomes too complex 🔥 Challenge for today 👉 Create numbers from 1 to 5 👉 Create dictionary where: Key = number Value = cube of number Comment “DONE” when finished ✅ Follow Adeel Sajjad to stay consistent for 60 days 🚀 #Python #PythonProgramming #LearnPython #Coding #Programming #Developer #SoftwareEngineering
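One more trick building on the same syntax (my addition, not part of the day's lesson): pairing two lists into a dictionary with zip:

```python
names = ["adeel", "ali", "ahmed"]
ages = [25, 22, 27]

# zip pairs the lists element by element; the comprehension turns
# each pair into a key: value entry.
people = {name: age for name, age in zip(names, ages)}
print(people)  # {'adeel': 25, 'ali': 22, 'ahmed': 27}
```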