I recently benchmarked Python 3.13 against 3.14 on the same CPU-heavy task, initially out of curiosity. The results surprised me: the performance difference was significant and has changed my perspective on parallelism in Python.

While optimizing a CPU-bound data pipeline, my usual approach was ProcessPoolExecutor. It handles tasks effectively, but the OS-level process spawn cost adds up quickly. Python 3.14 introduces a new option: InterpreterPoolExecutor. It runs multiple isolated Python interpreters within the same process, each with its own GIL, so workers no longer contend for a single lock.

Here is how Python 3.13 and 3.14 compared:

─────────────────────────────────────────
📊 1. HEAVY CPU TASKS (8 tasks, 4 workers)
🔴 Threads: 2.519s (GIL serializes everything)
🟠 Processes: 1.222s (parallel, but costly to spawn)
🟢 Subinterpreters: 1.130s (parallel and lighter)
─────────────────────────────────────────
⚡ 2. STARTUP COST (50 tiny tasks, where it really shows)
🟠 Processes: 0.271s
🟢 Subinterpreters: 0.128s (about 2x faster to start)
─────────────────────────────────────────
📈 3. SCALING (1 → 8 workers)
🔴 Threads: flatlined at ~1.9s (no real scaling benefit)
🟢 Subinterpreters: 2.16s → 0.91s (close to linear scaling)
─────────────────────────────────────────

The key takeaway: process-level parallelism with thread-like startup speed, without GIL contention or per-process memory overhead, all within the standard library.

Are you still using ProcessPoolExecutor for CPU-bound work? I am genuinely interested in whether subinterpreters could be a practical improvement in your stack.

#Python #Python314 #SoftwareEngineering #Performance #Concurrency #BackendDevelopment #DataEngineering
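A minimal sketch of what the new pool looks like in code. This assumes the `InterpreterPoolExecutor` name from the Python 3.14 standard library; on older versions (or in restricted environments) the sketch falls back to threads so it still runs:

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n):
    # Small CPU-bound workload: sum of squares up to n.
    return sum(i * i for i in range(n))

def run_pool(executor_cls):
    with executor_cls(max_workers=4) as pool:
        return list(pool.map(cpu_task, [10_000] * 8))

try:
    # New in Python 3.14: one isolated interpreter (own GIL) per worker.
    from concurrent.futures import InterpreterPoolExecutor
    results = run_pool(InterpreterPoolExecutor)
except Exception:
    # Pre-3.14 Python has no InterpreterPoolExecutor; fall back to
    # threads so the sketch remains runnable (without the speedup).
    results = run_pool(ThreadPoolExecutor)

print(results)
```

The appeal is that the API surface is identical to ProcessPoolExecutor, so swapping the executor class is often the whole migration.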
Python 3.14 Subinterpreters Outperform ProcessPoolExecutor in CPU-Heavy Tasks
More Relevant Posts
Every line of Python creates objects. But who's tracking them? And what happens when they're no longer needed? Most developers trust Python to "just handle it." The ones who understand how write faster, leaner, and more reliable code.

Here's how it actually works:

1. Pymalloc, Python's own allocator
Python doesn't ask the OS for memory every time. It grabs large chunks upfront and carves them into Arenas → Pools → Blocks. Small-object allocation stays blazing fast.

2. Reference counting, the first line of defense
Every object silently tracks how many things point to it. Count hits zero? The object is destroyed instantly. No waiting, no pausing. Most objects never survive past this stage.

3. Garbage collector, the second line of defense
Reference counting has one weakness: circular references. Two objects pointing to each other never hit zero. Python's GC catches these using a generational strategy:
Gen 0 → new objects, collected most often
Gen 1 → survived once, collected less often
Gen 2 → long-lived, collected rarely
Most garbage dies young. Python bets on it, and wins.

💡 Things worth knowing:
→ __slots__: replaces per-object dictionaries with compact fixed structures. Can cut memory by 40-50% for classes with millions of instances.
→ weakref: references that don't keep objects alive. Essential for caches and observer patterns.
→ tracemalloc: tracks allocations to the exact line. Your best friend for hunting memory leaks in production.
→ Generators over lists: constant memory instead of allocating everything upfront. Prefer generators when iterating once.

The mindset shift: Python manages memory so you don't have to think about it. But the best developers choose to understand it anyway, because when you know how objects live and die, every line of code becomes more intentional.

What's the nastiest memory bug you've tracked down in Python?

#Python #SoftwareEngineering #PythonInternals #PythonTips #CleanCode #BackendDevelopment
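The first two lines of defense can be observed directly from Python. A small sketch (the `Node` class is a hypothetical example of mine, not from the post):

```python
import gc
import sys

class Node:
    """Toy object that can participate in a reference cycle."""
    def __init__(self):
        self.partner = None

# 1. Reference counting: getrefcount reports at least 2 here
#    (the name x plus the temporary reference created by the call).
x = []
refs = sys.getrefcount(x)

# 2. Circular references: after del, the two Nodes still point at
#    each other, so only the generational collector can reclaim them.
a, b = Node(), Node()
a.partner, b.partner = b, a
del a, b
unreachable = gc.collect()  # returns the number of unreachable objects found
print(refs, unreachable)
```

Calling `gc.collect()` by hand like this is also a handy trick when measuring memory in tests, since it makes cycle collection deterministic.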
Consider the following code in Python:

```python
def add_item(lst):
    lst.append(100)

a = [1, 2, 3]
add_item(a)
print(a)
```

What happens here? ✅ An in-place modification occurs on the list. Lists in Python are mutable objects, which means they can be modified after they are created. Let's break it down step by step.

1️⃣ Creating the list
When we write a = [1, 2, 3], Python creates a list object in memory, and the variable a references it.

2️⃣ Calling the function
When we call add_item(a), the parameter lst inside the function references the same list object as a. ➡️ Both variables point to the same object in memory.

3️⃣ Inside the function
lst.append(100) modifies the list itself. This is called in-place modification: the original list object is updated instead of a new one being created. The list now becomes [1, 2, 3, 100].

4️⃣ Printing the result
Since both a and lst reference the same list, the change is visible through a. print(a) outputs: [1, 2, 3, 100]

📌 Final thought
Understanding how variables reference objects in memory is essential when working with mutable data types like lists in Python.

#Python #PythonProgramming #Coding #LearnPython #SoftwareDevelopment
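The aliasing described in the steps above can be checked with an identity test; a small sketch along the lines of the post's example (the `return` is my addition so the identity is easy to inspect):

```python
def add_item(lst):
    # append mutates the caller's list in place; no copy is made
    lst.append(100)
    return lst

a = [1, 2, 3]
returned = add_item(a)

# The function worked on the very same object the caller holds.
same_object = returned is a
print(a, same_object)
```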
Real Python just published my guide: How to Use the OpenRouter API to Access Multiple AI Models via Python 🐍

If you've ever had to maintain separate integrations for OpenAI, Anthropic, Mistral, and Meta, you know how tedious it gets. OpenRouter solves this with a unified API layer that gives you access to 300+ models from a single endpoint.

In this guide, you'll learn how to:
→ Authenticate and connect using Python's requests library (no SDK required)
→ Route requests to specific providers, sorted by cost, latency, or throughput
→ Implement model fallbacks so your app stays resilient when a provider goes down

This is especially useful for production systems where reliability matters and you can't afford a single point of failure.

🔗 Read the full article: https://lnkd.in/dvqe-ukw

#Python #AI #APIIntegration #MachineLearning #SoftwareDevelopment
Slow Python? Meet pybind11!

It started with simple curiosity while exploring autograd in deep learning frameworks. To better understand how gradients work behind the scenes, we implemented #micrograd by Andrej Karpathy, a tiny educational autograd engine written in Python.

After building it, a natural question emerged: if Python is slow, how are performance-heavy libraries still so fast? Digging deeper, we discovered that many high-performance Python libraries rely on C++ under the hood, with Python acting as a clean interface. This led us to explore pybind11, a lightweight way to bridge Python and C++!

We then:
- Implemented micrograd in C++
- Exposed it to Python using pybind11
- Generated .so modules using setup.py
- Recreated a simplified Python-plus-C++ architecture

This small experiment helped us understand how Python can leverage C++ for performance while maintaining developer productivity. pybind11 is one of several options, alongside SWIG, Boost.Python, and Cython, that can be used for language bindings.

This exploration was done in collaboration with Kavin Kumar, where we jointly worked on both the Medium article and the C++ micrograd implementation.

To check our implementation, GitHub: https://lnkd.in/gz6GBuNV
For a deep dive, explore our article: https://lnkd.in/g2y8KRta

PS: Not AI Generated :)

#Python #CPP #CPlusPlus #Pybind11 #MachineLearning #DeepLearning #Autograd #Micrograd #PythonPerformance
In Python, everything is an object. That one fact broke my mental model this week. I'm transitioning from software engineering to AI engineering. Week 1, Day 7 🗓️

Here's what clicked 😲.

Dicts store and find values in O(1): they hash the key and jump directly to the slot. No scanning, no searching. But this only works if the key's hash never changes. A key whose hash changed after insertion would map to a different slot, and your value becomes permanently unreachable. Silent corruption 🤯.

So dict keys must be hashable, which is why the mutable built-in containers (list, dict, set) refuse to be keys at all. Hashable objects carry two methods that work as a pair: __hash__() generates the slot number and __eq__() confirms the exact key at that slot. Define __eq__ without __hash__ and Python punishes you:

```python
class BrokenDog:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name == other.name
    # no __hash__: defining __eq__ alone sets __hash__ to None

hash(BrokenDog("Rex"))  # ❌ TypeError: unhashable type: 'BrokenDog'
```

Python's built-in types like dict and str override __eq__() to compare contents, not addresses. That's why two dicts with identical keys and values return True on ==.

One more thing that surprised me: since every Python object carries the overhead of creation, hashing, and storage, even a simple int is not free. That's exactly why NumPy exists, to escape Python's object model and work directly with raw memory for numerical computation.

For more details, refer: https://lnkd.in/gZCGFRBB

What's the Python concept that surprised you most when you first learned it?

#AIEngineering #Python #LearningInPublic #CareerTransition #SoftwareEngineering
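For contrast with BrokenDog, here is a sketch of the pair done correctly (a hypothetical `Dog` class of mine; the hash delegates to the same immutable field that `__eq__` compares, which keeps the two methods consistent):

```python
class Dog:
    """Hashable value object: __eq__ and __hash__ defined as a pair."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Dog) and self.name == other.name

    def __hash__(self):
        # Hash the same field __eq__ compares, so equal objects hash equal.
        return hash(self.name)

kennel = {Dog("Rex"): "good boy"}
found = kennel[Dog("Rex")]  # a *different* object, but equal and same hash
print(found)
```

Two distinct `Dog("Rex")` instances now behave as the same dict key, exactly the way two equal strings do.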
💡 Understanding Default Parameters in Python

While working with functions in Python, default parameters can make our code more flexible and easier to use. Let's look at this simple example:

```python
def func(a, b=2, c=3):
    return a + b * c

print(func(2, c=4))
```

1️⃣ Step 1: Function definition
The function func has three parameters: a, b with a default value of 2, and c with a default value of 3. ➡️ If we call the function without providing values for b or c, Python automatically uses their defaults.

2️⃣ Step 2: Calling the function
In func(2, c=4): the value 2 is assigned to a; no value is passed for b, so Python uses the default 2; c=4 is passed explicitly, overriding the default 3. So inside the function: a = 2, b = 2, c = 4.

3️⃣ Step 3: Evaluating the expression
The function returns a + b * c. Substituting the values: 2 + 2 * 4. According to Python's order of operations, multiplication happens before addition: 2 + 8 = 10.

➡️ Final output: 10

🔹 Important concept: default parameters allow functions to work with optional arguments. They make functions more flexible, cleaner, and easier to reuse.

#Python #Programming #AI #DataAnalytics #Coding #LearnPython
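The same mechanics in runnable form (the extra calls beyond `func(2, c=4)` are my own illustrations of the other combinations):

```python
def func(a, b=2, c=3):
    # b and c fall back to their defaults when not supplied
    return a + b * c

r1 = func(2, c=4)  # a=2, b=2 (default), c=4 -> 2 + 2*4 = 10
r2 = func(2)       # all defaults:  2 + 2*3 = 8
r3 = func(1, 5)    # positional b=5: 1 + 5*3 = 16
print(r1, r2, r3)
```

One caution worth adding: default values are evaluated once, at definition time, so mutable defaults like `def f(items=[])` are shared across calls and are best avoided.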
🧠 Python Concept That Explains Why += Can Mutate: In-place vs. New Objects (__iadd__)

Why does this behave differently? 👀

```python
a = [1, 2]
b = a
a += [3]
print(a)  # [1, 2, 3]
print(b)  # [1, 2, 3]
```

But:

```python
x = (1, 2)
y = x
x += (3,)
print(x)  # (1, 2, 3)
print(y)  # (1, 2)
```

Same +=, different result 🤯

🤔 The reason: __iadd__
For a += b, Python tries:
1️⃣ __iadd__ (in-place add)
2️⃣ otherwise → __add__ (new object)

🧪 Lists implement __iadd__ (list.__iadd__(self, other)), so the list is modified in place.
🧪 Tuples don't, so Python creates a new tuple and rebinds the name.

🧒 Simple explanation: a list is clay 🧱, you reshape the same clay. A tuple is a brick 🧱, you must make a new brick.

💡 Why this matters:
✔ Mutability understanding
✔ Side-effect bugs
✔ Performance
✔ Data structures
✔ Interview classic

⚡ Key insight: capture id(a) before and after a += ...; the id stays the same for mutable types and changes for immutable types.

💻 In Python, += doesn't always mean "new value". 🐍 Sometimes it means "modify in place". 🐍 The difference comes from __iadd__.

#Python #PythonTips #PythonTricks #AdvancedPython #CleanCode #LearnPython #Programming #DeveloperLife #DailyCoding #100DaysOfCode
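The key insight can be verified with aliases rather than raw ids; a small runnable sketch:

```python
a = [1, 2]
b = a
a += [3]   # list.__iadd__ mutates the existing list in place

x = (1, 2)
y = x
x += (3,)  # tuples define no __iadd__, so += builds a new tuple for x

list_shared = b is a   # the alias still names the same (mutated) list
tuple_shared = y is x  # the alias still names the *old* tuple
print(list_shared, tuple_shared)
```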
I've just published my first blog post on Medium, diving into a Python concept I recently explored:
👉 Mutable vs. Immutable Objects

Before this, I used to think variables simply "store values." But in Python, they actually reference objects in memory. Having learned C previously really helped me grasp this concept more quickly, especially when it comes to understanding how memory and references work.

In this post, I break down:
• The difference between == and is
• Mutable vs. immutable objects (with clear examples)
• Why l1 = l1 + [4] and l1 += [4] behave differently
• How Python passes arguments to functions

Writing about what I learn pushes me to go deeper and truly understand the concepts. If you're learning Python or preparing for technical interviews, this is definitely a topic worth mastering.

📖 Read here: https://lnkd.in/guVnYN3Q

#Python #SoftwareEngineering #Programming #LearningJourney #Tech
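One of the distinctions listed above, == versus is, fits in a minimal sketch (my own example, not taken from the linked article):

```python
l1 = [1, 2, 3]
l2 = [1, 2, 3]
equal = l1 == l2      # True: same contents
identical = l1 is l2  # False: two distinct objects in memory

l3 = l1
aliased = l3 is l1    # True: one object, two names
print(equal, identical, aliased)
```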
Day 20 of My Python Learning Journey

Today I learned about shallow copy and deep copy in Python.

📌 Shallow Copy
A shallow copy creates a new object, but the nested objects inside it are still referenced from the original object. So changes in nested elements will affect both copies.

```python
import copy

list1 = [[1, 2, 3], [4, 5, 6]]
shallow = copy.copy(list1)
shallow[0][0] = 100
print(list1)   # [[100, 2, 3], [4, 5, 6]]
print(shallow) # [[100, 2, 3], [4, 5, 6]]
```

📌 Deep Copy
A deep copy creates a completely independent copy of the original object, including all nested objects. Changes in one object will not affect the other.

```python
import copy

list1 = [[1, 2, 3], [4, 5, 6]]
deep = copy.deepcopy(list1)
deep[0][0] = 100
print(list1) # [[1, 2, 3], [4, 5, 6]]
print(deep)  # [[100, 2, 3], [4, 5, 6]]
```

✅ Key difference:
Shallow copy → copies only the outer object
Deep copy → copies the outer object plus all nested objects

Learning these concepts helps in understanding memory handling and object references in Python.

#Python #PythonLearning #Day20 #CodingJourney #Programming #LearningEveryday
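Both behaviors can be seen side by side in one compact sketch, with identity checks making the sharing explicit:

```python
import copy

original = [[1, 2, 3], [4, 5, 6]]
shallow = copy.copy(original)   # new outer list, shared inner lists
deep = copy.deepcopy(original)  # fully independent structure

original[0][0] = 100

shallow_sees_change = shallow[0][0] == 100  # inner lists are shared
deep_sees_change = deep[0][0] == 100        # deep copy is unaffected
print(shallow_sees_change, deep_sees_change)
```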
Python Concept: Shallow Copy

```python
a = [[10, 20], [30, 40], [50, 60]]
b = a.copy()

print(id(a))  # e.g. 1000 (illustrative address)
print(id(b))  # e.g. 2000 (a different address)

a[0][0] = 200
a[0][1] = 100

print(a)  # [[200, 100], [30, 40], [50, 60]]
print(b)  # [[200, 100], [30, 40], [50, 60]]
```

Even though a and b are different list objects at different memory addresses, changes in nested elements affect both. This happens because list.copy() makes a shallow copy, meaning the inner lists are still shared. Be careful when working with nested lists.