🚀 Advanced Python Tips #6 — multiprocessing.Pool

"Python is slow." No. Your execution model might be.

The iteration machinery behind a Python for loop is implemented in C and is surprisingly efficient. The real limitation isn't iteration speed; it's synchronous execution. If you're running CPU-bound tasks sequentially, you're leaving multiple CPU cores idle.

Here's the uncomfortable truth many developers gloss over:

👉 The GIL prevents true parallelism with threads for CPU-bound workloads.
👉 multiprocessing, however, does not share the GIL across processes.

If your tasks are CPU-intensive and independent, multiprocessing.Pool enables real parallelism. With Pool:

- Each worker process has its own Python interpreter
- Each worker process has its own GIL
- Work is distributed across multiple CPU cores
- You get true parallel execution

But here's the part that rarely makes it into tutorials:

⚠️ multiprocessing has non-trivial overhead (process spawn + pickling)
⚠️ For lightweight tasks, it can be slower than a simple for loop
⚠️ For I/O-bound workloads, asyncio or threading may be more efficient

Multiprocessing isn't a magic performance switch. It's a tool, and it only shines in the right context. It makes sense when your workload is:

- CPU-bound
- Independent
- Heavy enough to amortize the process overhead

Otherwise, you're just parallelizing the wrong bottleneck.

Python isn't slow. Misapplied parallelism is.

Do you analyze whether your bottleneck is CPU-bound or I/O-bound before parallelizing?
Python multiprocessing for CPU-bound tasks
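A minimal sketch of the Pool pattern described above, assuming a CPU-bound function heavy enough to amortize the process overhead (the function name and workload are illustrative, not from the original post):

```python
import math
from multiprocessing import Pool

def cpu_heavy(n: int) -> float:
    # Purely CPU-bound work: no I/O, no shared state
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":  # required guard: worker processes re-import this module
    inputs = [10_000_000] * 8

    # One worker process per core by default, each with its own interpreter and GIL
    with Pool() as pool:
        results = pool.map(cpu_heavy, inputs)

    print(sum(results))
```

Note that both the inputs and the results cross process boundaries via pickling, which is exactly the overhead the post warns about for lightweight tasks.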
More Relevant Posts
-
Python typing may be entering a very different phase.

PEP 827 is interesting not because it adds "more types"; it introduces ways to operate on types themselves.

Today Python typing works well for describing structure. But once you try to express transformations, it becomes much harder. For example, we often know what a function returns: get_field(user, "username") clearly returns `str`. But expressing that cleanly and generically in Python typing is still difficult.

This proposal moves in a different direction:

• conditional types
• type member access
• variadic generics
• type-level operators
• type introspection

In practice, this could change how frameworks and libraries are designed. Instead of duplicating models:

- User
- UserPublic
- UserCreate

you could define the transformations once and let the type system carry them forward.

That matters beyond tooling. Type systems are not only about static checks. They shape which abstractions are easy to express.

I have not gone deep into every detail yet. But directionally, this looks like one of the more interesting discussions around Python typing in recent years. Curious how others see this evolving.
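To make the pain point concrete, here is roughly how the get_field example has to be typed today. The overloads below are a hand-written sketch of the current workaround, not anything from the PEP:

```python
from typing import Literal, overload

class User:
    def __init__(self, username: str, age: int) -> None:
        self.username = username
        self.age = age

# Today, the field-to-type mapping must be spelled out by hand,
# one overload per field, and repeated for every class you support:
@overload
def get_field(obj: User, name: Literal["username"]) -> str: ...
@overload
def get_field(obj: User, name: Literal["age"]) -> int: ...
def get_field(obj: User, name: str) -> object:
    return getattr(obj, name)

u = get_field(User("ada", 36), "username")  # type checkers infer: str
```

Type-level operators would let this mapping be derived from the class itself instead of enumerated manually.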
-
**Python Is Not Dynamic. It Is Structurally Invariant.**

Most discussions about Python focus on syntax, libraries, or productivity. That is surface. At the deepest level, Python is defined by a small set of invariants.

1. Everything is an object.
2. Every object has identity.
3. Names bind to objects, not values.
4. Evaluation is deterministic.
5. Execution is stack-based.

In CPython, every object begins with the same structural header:

- a reference count
- a pointer to its type

There are no exceptions: int, list, function, class, metaclass. Uniform ontology.

Names do not store data. They bind references inside namespaces resolved by LEGB rules. Rebinding does not mutate objects; it changes what a name points to.

Execution is not "line by line." It is bytecode interpreted by a stack machine: LOAD, OPERATE, STORE. This invariant never changes.

Even dynamic features (metaclasses, runtime modification, AST manipulation) operate inside the same object model, the same lookup rules, the same memory discipline.

Python is dynamic at the surface. Structurally rigid at the core. That rigidity is what makes safe dynamism possible.

Systems fail when invariants are implicit or undefined. Python works because its invariants are simple, consistent, and universal. Everything else is variation on top of that structure.
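A small demonstration of these invariants at the REPL level; a sketch, not an exhaustive proof:

```python
import dis

# Invariants 1-2: everything is an object with identity and a type,
# including ints, functions, classes, and type itself.
for obj in (1, [1], len, int, type):
    print(id(obj), type(obj))

# Invariant 3: names bind to objects. Rebinding never mutates.
x = [1, 2]
y = x            # a second name bound to the SAME list object
y.append(3)      # mutation is visible through both names
assert x is y and x == [1, 2, 3]
y = [4]          # rebinding: 'y' now points elsewhere; x is untouched
assert x == [1, 2, 3]

# Invariant 5: execution is stack-machine bytecode (LOAD / STORE / ...).
def f():
    r = 1 + 1
    return r

dis.dis(f)       # 1 + 1 is folded at compile time: LOAD_CONST 2, STORE_FAST r
```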
-
"How Python Reads Your Code" explains exactly what happens behind the scenes when Python encounters a simple line of code like r = 1 + 1:

Step 1: Chopping It Up
First, Python breaks your line of code down into individual, bite-sized pieces (tokens). For r = 1 + 1 it identifies r as the variable name, = as the assignment operator, the first 1 as the first operand, + as the addition operator, and the final 1 as the second operand.

Step 2: Structuring & Trimming
Once the pieces are separated, Python builds a blueprint: a structure (the syntax tree) that shows exactly how all of these parts connect together. To work as efficiently as possible, Python then "trims the fat," removing unnecessary complexity from this blueprint, for example by folding constant expressions.

Step 3: Analyzing the Ingredients
Next, Python looks at each piece to determine its specific type, recognizing, for example, that the number 1 is an integer. Knowing exactly what kind of data it is holding, Python selects the correct operation (or "tool") needed to handle it.

Step 4: The Final Cook
Finally, Python executes the operation to process the code and produce your final result.

The Process at a Glance
01 Chopping: Breaking the code into pieces.
02 Structuring: Building a logical blueprint.
03 Analyzing: Identifying data types and the right tools.
04 Executing: Running the code and producing the result.
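You can watch these stages directly with the standard library; a minimal sketch (note that in CPython the type-based dispatch of Step 3 actually happens at run time, not compile time):

```python
import ast
import dis
import io
import token
import tokenize

src = "r = 1 + 1\n"

# Step 1 -- chopping: the tokenizer splits the line into tokens
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(token.tok_name[tok.type], repr(tok.string))

# Step 2 -- structuring: the parser builds the blueprint (an AST)
tree = ast.parse(src)
print(ast.dump(tree))

# Steps 3-4 -- compile to bytecode, then execute it
code = compile(src, "<demo>", "exec")
dis.dis(code)            # 1 + 1 is already folded to LOAD_CONST 2
ns = {}
exec(code, ns)
print(ns["r"])           # 2
```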
-
99% of Python developers don't know about __slots__.

**But this single line can cut your memory usage by 40-60%.**

Here's why this matters in ML/AI applications:

**Without __slots__:**

```python
class DataPoint:
    def __init__(self, x, y, features):
        self.x = x
        self.y = y
        self.features = features
```

Each instance stores attributes in a dictionary → ~280 bytes per object.

**With __slots__:**

```python
class DataPoint:
    __slots__ = ['x', 'y', 'features']

    def __init__(self, x, y, features):
        self.x = x
        self.y = y
        self.features = features
```

Attributes are stored in a fixed structure → ~120 bytes per object.

Real impact on ML workflows:
• Training with 1M+ data points? Save ~160MB instantly
• Faster attribute access (15-20% speed boost)
• Cleaner memory profiling during model training

**The catch?**
→ No dynamic attribute addition
→ Inheritance becomes trickier
→ Can't easily combine with multiple inheritance

When building ML pipelines with massive datasets, this optimization can be the difference between smooth training and memory crashes.

Have you used __slots__ in your Python projects? What memory optimization tricks do you swear by? 🔧

#Python #MachineLearning #PerformanceOptimization
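The exact byte counts vary by Python version and field count, so it is worth measuring on your own setup. A minimal sketch using tracemalloc (the class and field names are illustrative):

```python
import tracemalloc

class Plain:
    def __init__(self, x, y, features):
        self.x, self.y, self.features = x, y, features

class Slotted:
    __slots__ = ("x", "y", "features")
    def __init__(self, x, y, features):
        self.x, self.y, self.features = x, y, features

def bytes_per_instance(cls, n=100_000):
    tracemalloc.start()
    objs = [cls(i, i, None) for i in range(n)]  # kept alive until measured
    current, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current // n  # rough figure; includes the list's own overhead

print("plain:  ", bytes_per_instance(Plain), "bytes/instance")
print("slotted:", bytes_per_instance(Slotted), "bytes/instance")
```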
-
Python question: which of the programs in the screenshot runs faster?

One would assume it's the one on the right, because it shards the workload among 4 threads. Here are the results from my machine:

Single thread (left): 3.456s
Multi-threaded (right): 3.561s

The single-threaded one is actually faster! The reason is the Global Interpreter Lock (GIL). It exists because the CPython runtime is not completely thread-safe, so only one thread is allowed to execute Python bytecode at a time.

For CPU-bound work like in the example, this means that at any given moment only one thread can get work done while the others wait for the GIL to be released. On top of that, context switching adds overhead, so the multi-threaded variant ends up even slower. (There is even special handling for completely single-threaded applications so they can skip the GIL entirely.)

The CPython folks have been working on making the GIL optional (PEP 703). Since 3.13, CPython can be compiled as "free-threaded" (but this is not enabled in the regular distribution). I tested the above code with the free-threaded version (the easiest way I found to run it was the uv Docker image). The result with 4 threads on free-threaded Python: 1.074s.

It finished more than 3 times as fast as the single-threaded version, so the work really is done in parallel now. At some point the free-threaded build will become the default, but as far as I could find out there is no timeline for this yet.
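The screenshot is not reproduced here, but the benchmark described is presumably along these lines; a hypothetical reconstruction, with the countdown function and iteration count being my own choices:

```python
import threading
import time

N = 50_000_000

def countdown(n):
    # Pure-Python, CPU-bound busy work
    while n > 0:
        n -= 1

# Left: single-threaded
start = time.perf_counter()
countdown(N)
print(f"single thread: {time.perf_counter() - start:.3f}s")

# Right: the same workload sharded across 4 threads
start = time.perf_counter()
threads = [threading.Thread(target=countdown, args=(N // 4,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"4 threads:     {time.perf_counter() - start:.3f}s")
```

On a standard CPython build the threaded version is no faster (often slower); on a free-threaded build the four threads can run on four cores at once.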
-
Consider the following code in Python:

```python
def add_item(lst):
    lst.append(100)

a = [1, 2, 3]
add_item(a)
print(a)
```

What happens here? The correct explanation is:

✅ An in-place modification occurs on the list.

Lists in Python are mutable objects, which means they can be modified after they are created. Let's break it down step by step.

1️⃣ Creating the list
When we write a = [1, 2, 3], Python creates a list object in memory, and the variable a references it:

a → [1, 2, 3]

2️⃣ Calling the function
When add_item(a) is called, the parameter lst inside the function references the same list object:

a → [1, 2, 3] ← lst

➡️ Both variables point to the same object in memory.

3️⃣ Inside the function
lst.append(100) modifies the list itself. This is called in-place modification: the original list object is updated instead of a new one being created. The list now becomes [1, 2, 3, 100].

4️⃣ Printing the result
Since both a and lst reference the same list, the change is visible through a. Executing print(a) outputs:

[1, 2, 3, 100]

📌 Final thought
Understanding how variables reference objects in memory is essential when working with mutable data types like lists in Python.

#Python #PythonProgramming #Coding #LearnPython #SoftwareDevelopment
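For contrast (my addition, not part of the original post): rebinding the parameter inside the function does not affect the caller, because it only changes what the local name points to:

```python
def replace_items(lst):
    lst = [100]        # rebinding: the local name now points to a NEW list

a = [1, 2, 3]
replace_items(a)
print(a)               # [1, 2, 3] -- the caller's list is unchanged
```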
-
After ~6 years of building with Python, I still get surprised by how much depth there is in the "basic" stuff. I recently read an article on Python memory management (from stack frames to closures), and it genuinely taught me a lot.

A few things that really clicked for me:

• Garbage collection vs. references (finally cleared up): I've used Python's GC forever, but this article explains the relationship between reference counting, object references, and cycle detection in a way that removed a bunch of mental fog. It made it easier to reason about why objects stick around, when they get freed, and why "it should be collected" isn't always true the way we casually assume.

• Stack frames, scopes, and what actually stays alive: the framing around stack frames helped connect execution flow to memory behavior, especially how local variables, function calls, and references interact under the hood.

• Closures, cell objects, and the "ohhh, that's why" moment: I knew closures existed, but I wasn't aware of cell objects, the mechanism Python uses to keep variables alive for inner functions. That was new to me, and honestly one of those concepts that makes you write better, safer code once you understand it.

What I liked most: this is one of the few articles I've read recently that covers a very foundational concept but still manages to teach a lot without getting hand-wavy.

If you write Python professionally (or even casually), it's worth reading, especially if you've ever had questions like:
• "Why didn't this object get freed yet?"
• "What exactly does GC do here?"
• "How do closures really store state?"

https://lnkd.in/dZSi27vn

#Python #SoftwareEngineering #BackendEngineering #Programming #ComputerScience #MemoryManagement
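A tiny illustration of the cell-object mechanism mentioned above (a sketch of my own, not taken from the article):

```python
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()
counter()
counter()

# make_counter() has returned, yet 'count' is still alive: the inner
# function holds it in a cell object via its __closure__ attribute.
print(counter.__closure__)                   # (<cell at 0x...: int object at 0x...>,)
print(counter.__closure__[0].cell_contents)  # 2
```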
-
My latest Python project's test suite passed with flying colors. But when I ran it in a live Python session, the package wouldn't build. Why?

I'd leaned on Codex 5.4 for much of the development work. I orchestrated, it handled implementation, and it built incredible infrastructure with thorough tests. But it only built support up to Python 3.12, while versions 3.13 and 3.14 (which I was using in my live session) are now available!

I see this constantly when developing with LLMs. They produce "cutting-edge work" that only goes up to the limits of their training data: "the latest ACS data from tidycensus" pulled for 2022 when 2024 is now available, or web maps built with MapLibre 4.0 when the current release is 5.20.

This can be resolved by giving LLMs explicit instructions to search the web for the latest versions and data. But that requires a human who knows what to check setting it up. I knew the 2024 ACS is available because I maintain tidycensus. I knew MapLibre was on 5.x because I maintain mapgl. Someone running a fully automated workflow without that domain knowledge wouldn't have caught either gap.

LLMs as amplifiers for human expertise yield incredible results. But those "fully automated" AI workflows? They might be producing last year's insights with this year's confidence.
-
Just dropped a new blog: "Python Operators Explained: From Arithmetic to Logical and Comparison."

When I was learning Python, I noticed that operators are more than just symbols: they define how your code makes decisions and calculations. Using them effectively can make your programs faster, cleaner, and easier to maintain.

In this post, I've broken down arithmetic, logical, and comparison operators with simple examples and practical use cases, so beginners can quickly grasp how Python evaluates and compares data. Getting comfortable with operators is a small step that makes a big difference in writing efficient, readable Python code.

Innomatics Research Labs

#python_programming #Data_Science #Software_Development
Python Operators Demystified: Understanding Arithmetic, Comparison, and Logical Operators (medium.com)