Day 293: Python asyncio — Modern Python Concurrency ⏳

Why asyncio feels different

Unlike threading or multiprocessing, asyncio is about cooperation, not parallelism. Tasks don't all run at the same time — they pause and resume while waiting on I/O, all on a single thread.

👉 Simple async example

```python
import asyncio

async def hello():
    print("Hello")
    await asyncio.sleep(1)  # yield control while "waiting"
    print("World")

asyncio.run(hello())
```

The magic is `await`. It tells Python: "I'm waiting, let something else run."

💡 Perfect for
- Web scraping
- API calls
- Async servers
- Non-blocking applications

🧠 Challenge
Write an async program that downloads data from multiple URLs at the same time.

#Python #AsyncIO #AsyncProgramming #ModernPython
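A minimal sketch of the challenge above, assuming `asyncio.sleep` stands in for real network I/O (swap in a client like `aiohttp` or `httpx` for actual downloads); the URLs here are placeholders:

```python
import asyncio
import time

async def fetch(url: str) -> str:
    """Pretend to download a URL; the sleep simulates waiting on the network."""
    await asyncio.sleep(0.1)
    return f"data from {url}"

async def main(urls):
    # gather() schedules every coroutine at once and waits for all of them,
    # returning results in the same order as the inputs.
    return await asyncio.gather(*(fetch(u) for u in urls))

urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]
start = time.perf_counter()
results = asyncio.run(main(urls))
elapsed = time.perf_counter() - start
# The three 0.1 s "downloads" overlap, so total time stays near 0.1 s, not 0.3 s.
```

Because the waits overlap, adding more URLs barely changes the total runtime — that is the whole point of cooperative concurrency.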
🧠 Python Feature That Feels Smart: set() for Removing Duplicates

Most people do this 👇

```python
unique = []
for x in nums:
    if x not in unique:
        unique.append(x)
```

Python says… one line 😎

✅ Pythonic Way

```python
unique = list(set(nums))
```

🧒 Simple Explanation
Imagine sorting marbles 🟢🔵🟢🔴 A set keeps only one of each color. Duplicates? Gone ✨

💡 Why This Matters
✔ Removes duplicates fast
✔ Cleaner code
✔ Very common in interviews
✔ Great for data cleaning

⚠️ Important Note
set() does not keep order. If order matters 👇

```python
unique = list(dict.fromkeys(nums))
```

💻 Python has tools that replace 10 lines of code with 1. Knowing them is what separates writing code from writing good code 🐍✨

#Python #PythonProgramming #PythonTips #LearnPython #CodingTips #Programming #SoftwareDevelopment #DataCleaning #DeveloperCommunity #TechCareers #CodeSmart #100DaysOfCode
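A quick runnable comparison of the two approaches — `nums` is an illustrative input:

```python
nums = [3, 1, 3, 2, 1]

# set() removes duplicates but may return elements in any order.
deduped = list(set(nums))

# dict.fromkeys() keeps first-seen order, because dicts (Python 3.7+)
# preserve insertion order.
ordered = list(dict.fromkeys(nums))
```

For `[3, 1, 3, 2, 1]`, the `dict.fromkeys` version yields `[3, 1, 2]`, while the `set` version contains the same three values in an arbitrary order.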
There is a one-line trick that can save around 50% of RAM usage in Python.

By default, Python objects are flexible but heavy. To let you store any attribute on an object, every Python object carries a dictionary called `__dict__`, and each attribute is mapped inside this underlying dictionary. This is why we can add new attributes to an object on the fly:

```python
user.new_attribute = "surprise!"
```

That means a backend system that creates millions of objects (users, transactions, events) creates millions of dictionaries too. And dictionaries are memory-consuming, because they use hash tables to stay fast.

Solution: `__slots__`

If you know exactly which attributes your object will have, you can tell Python: reserve space for these fields only.

Impact
- Memory: a slotted class uses significantly less RAM (often 40% to 60%)
- Speed: attribute access is slightly faster
- Strictness: you can't add random attributes anymore

Takeaway: when you don't need that flexibility, `__slots__` helps you keep things lean.

I am trying to learn Python internals in detail and will share my learnings. Do follow along and share your experiences in the comments.

#Python #PythonInternals #SoftwareEngineering #BackendDevelopment
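The post's own code isn't shown, so here is a minimal sketch of the idea — `DictUser` and `SlotsUser` are illustrative names (the latter matches the class the post mentions):

```python
class DictUser:
    def __init__(self, name, age):
        self.name = name
        self.age = age

class SlotsUser:
    # Python allocates fixed slots instead of a per-instance __dict__.
    __slots__ = ("name", "age")

    def __init__(self, name, age):
        self.name = name
        self.age = age

u1 = DictUser("ada", 36)
u2 = SlotsUser("ada", 36)

has_dict = hasattr(u1, "__dict__")        # regular instances carry a dict
slots_has_dict = hasattr(u2, "__dict__")  # slotted instances do not

try:
    u2.email = "ada@example.com"  # undeclared attribute: rejected
    strict = False
except AttributeError:
    strict = True
```

The missing `__dict__` is where the memory saving comes from; the `AttributeError` is the strictness trade-off.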
In C, an integer takes 4 bytes. In Python, the number 1 takes 28 bytes. Why?

Because Python is dynamically typed, it needs to store more than just the value 1. It needs metadata about that value so the interpreter knows what to do with it.

How? In the CPython source code, every single thing is a PyObject. So when we create `x = 1`, the following structure is created with it:
1. `ob_refcnt` (8 bytes): the reference counter. It tracks how many variables point to this object.
2. `ob_type` (8 bytes): a pointer to the type object (telling Python the datatype).
3. `ob_size` (8 bytes): the size field, for variable-sized objects (like ints and lists).
4. The actual value (4-8 bytes): the number 1 itself.

Understanding this overhead explains why Python, as a dynamically typed language, is memory-intensive.

I am trying to learn Python internals in detail and will share my learnings. Do follow along and share your experiences in the comments.

#Python #PythonInternals #SoftwareEngineering #BackendDevelopment
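You can check this overhead yourself with `sys.getsizeof`; the exact numbers below assume a typical 64-bit CPython build:

```python
import sys

# getsizeof reports the object header plus the value's storage.
size_of_one = sys.getsizeof(1)       # typically 28 bytes on 64-bit CPython
size_of_big = sys.getsizeof(10**30)  # large ints need extra "digit" storage
```

The header cost is fixed, so small and large ints differ only in the trailing digit array — which is also why Python ints can grow without bound.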
Why is creating a tuple faster than creating a list in Python?

TL;DR: Python doesn't always create new objects — it reuses cached ones!

Creating an object is expensive: it needs to ask for memory. To avoid this, the CPython implementation uses FREE LISTS for immutable objects like tuples.

How does it work?
1. When you stop using a small tuple (up to 20 elements), Python doesn't immediately release it from RAM.
2. It moves it to a "free list" (a specialised cache).
3. When you need a new tuple of that same size, Python just grabs an old one from the cache and reuses it.

Lists, however, are rarely recycled this way, because their dynamic nature makes them too complex to keep in a simple cache.

Why this matters
- In a real-time game engine or a data-processing pipeline, you might be creating objects millions of times per second.
- The list tax: every time you use `[a, b]`, you are potentially triggering a memory-allocation request.
- The tuple win: every time you use `(a, b)`, you are likely just grabbing a "pre-warmed" slot from Python's internal cache.

I'm deep-diving into Python internals and performance. Do follow along and share your experiences in the comments.

#Python #PythonInternals #SoftwareEngineering #BackendDevelopment
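A rough way to see the difference is to time both constructions with `timeit`; absolute numbers vary by machine and CPython version, so treat this as a sketch to run yourself:

```python
import timeit

# Build the same two-element container a million times each way.
tuple_time = timeit.timeit("(a, b)", globals={"a": 1, "b": 2}, number=1_000_000)
list_time = timeit.timeit("[a, b]", globals={"a": 1, "b": 2}, number=1_000_000)
# On a typical CPython build, the tuple timing comes out noticeably lower.
```

Note this measures the combined effect of free-list reuse and the simpler fixed-size layout of tuples, not the free list in isolation.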
More on the Python tools I’ve been building lately: I recently finished two backend utilities that I’m really proud of. PyError Repo: https://lnkd.in/ekpQhw6i A structured error‑handling tool for Python. It intercepts exceptions, logs them to a text file, and prints a clean, readable error message to the console. You get the error type, file path, method, exact line, and even the timestamp — all formatted so you can actually understand what went wrong. PyCommand Repo: https://lnkd.in/eFurREpX A lightweight command‑routing system. You register your functions once, and then invoke them by index with parameters. Paired with PyError, it gives you fully automated error catching and logging for any command you run. Route your commands and boom — instant structure. And there’s more coming. A lot more. This whole experience made something very clear to me: I love Python, and I have a real affinity for it. I’ve only been learning Python since Monday, and I built all of this without using any AI assistance. If you’re curious, I’ve even posted timelapses of some of my earlier tools on YouTube: https://lnkd.in/ev_5FPvA
Why is `range(1_000_000)` cheap, but `list(range(1_000_000))` costly in Python?

TL;DR: the iteration protocol in Python only needs to know the next item, not the full list.

The "Next Page" Rule

Iteration in Python isn't about holding a collection of items; it's about knowing how to get the next item. Two special methods make this possible:
1. `__iter__()` → tells Python "I can be looped over"
2. `__next__()` → returns the next value, one at a time

When there's nothing left, `StopIteration` tells Python to stop the loop.

Why this matters: when we use a list, we pay for all the memory upfront. When we use the iteration protocol, we only pay for one item at a time. This is called lazy evaluation.

Takeaway: if an object represents a collection or a stream of data, implement `__iter__` and `__next__`. It makes the code more memory-efficient and much more "Pythonic."

I'm deep-diving into Python protocols this week and will share my learnings. Do follow along and share your experiences in the comments.

#Python #PythonInternals #SoftwareEngineering #BackendDevelopment
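The two methods above can be sketched in a tiny custom iterator — `Countdown` is an illustrative example, not from the original post:

```python
class Countdown:
    """A lazy sequence: values are produced one at a time, never stored as a list."""

    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self  # "I can be looped over"

    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # tells the for-loop to stop
        value = self.current
        self.current -= 1
        return value

values = list(Countdown(3))  # the for-machinery inside list() drives __next__
```

Only one value exists at a time until `list()` collects them — the same reason `range` itself stays cheap no matter how large its bounds are.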
Day 37 – Understanding How Python Stores Data

Today I continued with hash tables in Python, the structure that powers dictionaries (dict). In simple terms: Python uses a smart system to store and find data almost instantly, instead of searching line by line. That's why dictionaries are fast and used everywhere — from logins to APIs to caching.

Today, I didn't just read about how Python dictionaries work — I built a simple hash table from scratch in VS Code.

What I did:
- Created a basic HashTable class
- Used Python's hash() function to decide where data should live
- Stored values in buckets (lists) to safely handle collisions
- Retrieved values using keys, just like a real Python dict
- Even tested collisions by inserting keys that land in the same bucket

I learned:
- Why dictionary keys must not change
- What a hash is (Python's way of knowing where to store data)
- Why this concept matters for building fast and scalable systems

This might look small, but it's one of the ideas behind efficient backend and full-stack development. Slow progress is still progress. Understanding beats rushing.

Which of the terms or concepts used here sounds too scary and unusual for you? Let me know, let's learn together 😊

#Day37 #LearningInPublic #Python #DataStructures #BackendBasics #Consistency
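The author's code isn't shown, so here is a minimal sketch along the lines the post describes — `hash()` picks a bucket, and buckets are lists so colliding keys can coexist:

```python
class HashTable:
    """Toy hash table: not production code, just the core idea behind dict."""

    def __init__(self, size=8):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % self.size  # map any hashable key to a bucket

    def set(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:               # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))    # collision or new key: append

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

ht = HashTable(size=2)  # tiny bucket count guarantees collisions
ht.set("login", "ok")
ht.set("cache", "warm")
ht.set("login", "updated")  # overwrite, just like a real dict
```

With only two buckets, some keys necessarily share a bucket, which is exactly the collision case the post mentions — and also why keys must stay immutable: a mutated key would hash to a different bucket and become unfindable.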
https://lnkd.in/e73QC5-9

fst — version 0.2.5

Overview: this module exists to facilitate quick, easy, high-level editing of Python source in the form of an AST tree while preserving formatting. It is meant to let you change Python code's functionality without having to deal with the details of:
- Operator precedence and parentheses
- Indentation and line continuations
- Commas, semicolons, and tuple edge cases
- Comments and docstrings
- Various Python version-specific syntax quirks
- Lots more...
TL;DR: Data-Driven SIMCA is now available as a Python package. Check all the details here: https://lnkd.in/ejWzeFz7 During the Christmas break, I worked on a paper where, among other methods, I used Variational Autoencoders and DD-SIMCA. Since Python is the natural choice for ANN-based modeling, I decided to implement the DD-SIMCA routines in Python as well, keeping everything in one place and producing plots with a consistent style. After that, I polished the code and released it as a separate package. The package provides the same functionality as our DD-SIMCA web application (https://mda.tools/ddsimca), which serves as both our baseline and frontrunner. You can obtain the same outcomes and plots, but directly in Python. The repository description contains all the necessary information, including a link to a demo notebook with a code example. The notebook is based on our open access DD-SIMCA tutorial paper (https://lnkd.in/gZjJkU_G).
Day 3 of Python. Writing code once. Using it everywhere.

Today's focus was functions and modules. This is where Python stopped feeling like a scripting language and started feeling like a system tool.

What I worked on:
- Writing reusable functions
- Passing data through parameters
- Returning predictable outputs
- Organizing logic into modules

The key realization: repetition is a design problem. If the same logic appears in multiple places:
- Bugs multiply
- Fixes become risky
- Pipelines turn fragile

Functions solve logic. Modules protect structure.

This is how Python scales from notebooks to production:
- One function, one responsibility
- Shared utilities across files
- Clean imports instead of copy-paste

This mindset is critical before touching Pandas or building pipelines. Tomorrow: applying this structure to real datasets.

If you work with Python: what was the first function you automated that saved you real time?

#datawithanurag #dataxbootcamp
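The "one function, one responsibility" idea can be sketched with a small data-cleaning helper — the function name, module path in the comment, and sample headers are all hypothetical:

```python
def clean_column_name(name: str) -> str:
    """One responsibility: normalize a column name so every pipeline uses the same rule."""
    return name.strip().lower().replace(" ", "_")

# In a real project this would live in a shared module and be imported, e.g.:
#   from utils.text import clean_column_name
headers = [" Order ID", "Customer Name ", "total price"]
cleaned = [clean_column_name(h) for h in headers]
```

Because the rule lives in one place, a fix (say, also collapsing double spaces) propagates to every pipeline that imports it — instead of hunting down copy-pasted variants.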