The reason I started programming in Python was its simplicity, but with maturity it seems Python has some big inherent flaws that are going nowhere soon. The biggest: the GIL (Global Interpreter Lock). It limits true parallelism, unlike Java, Go, or other compiled languages for that matter. The fact that no matter how many threads you add, locking means they only increase overhead for CPU-bound tasks rather than reducing it is baffling and a complete waste of resources. If someone is interested in building high-performance systems, or at least in entertaining the idea of building one, Python as a language seems to be a bottleneck. And I'm a firm believer that much of what's being built these days in the name of AI is merely API calls, which could be replicated in higher-performance languages, unless you depend on the open-source ecosystem because you're dealing with core machine learning and deep learning. One can argue that the GIL doesn't affect IO-bound tasks at all, and that when we build with TensorFlow, PyTorch, or CUDA, it's almost always C++ code executing under the hood. But I would argue that still limits how performant our systems can be, and why have something inferior when you can have something superior? The ecosystem challenge is understandable, to be honest; not everything is measured in raw speed, but in business impact as well. I so wish the entire thing could be changed; it's too late now, I assume! CPython 3.13 ships an experimental build with no GIL, but good luck using it in production; only god knows what bugs it comes with.
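A minimal sketch of the CPU-bound threading problem described above (the function name and work size are illustrative; exact timings vary by machine, but on a standard GIL build the threaded version is no faster than the sequential one):

```python
# Sketch: CPU-bound work gains nothing from extra threads under the GIL,
# because only one thread executes Python bytecode at a time.
import threading
import time

def count_down(n):
    while n > 0:
        n -= 1

N = 2_000_000

# Sequential baseline: the same total work, done one call after the other.
start = time.perf_counter()
count_down(N)
count_down(N)
sequential = time.perf_counter() - start

# Two threads, same total work: under the GIL they take turns, not run in parallel.
start = time.perf_counter()
t1 = threading.Thread(target=count_down, args=(N,))
t2 = threading.Thread(target=count_down, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

For CPU-bound work like this, multiprocessing (one GIL per process) is the usual escape hatch.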
Kunal Kumar’s Post
-
I heard a tip to use Rust instead of Python whenever you are coding with AI, due to the speed and, more importantly, the validation: the code won't compile if there are errors. Unlike Python, where you have to do the validation for the AI and go back and forth with prompts to fix it. I'm finding it way faster to generate code. Even though I don't know Rust that well, it will be a great learning experience. Right now I'm using Claude Code, but I might switch back to OpenCode's models again to see if that works. https://lnkd.in/gaDkaHXu
-
Tonight, I'm a fortune teller, and I predict that in the next 3-5 years Rust will take over from languages such as Python. Agent coding is not stopping. Python has runtime errors, and no amount of unit testing from an agent is going to keep up with that. Rust, on the other hand, even if you completely ignore the fact that it's compiled, has no runtime/VM dependency, real multithreading (no fake Python GIL), no GC, WebAssembly support, Linux kernel adoption, memory safety, no surprise memory leaks, etc. The main thing that will drive it to dominate as a programming language is its compiler. You could argue that more popular languages like Python mean AI agents will know them better than Rust, given how much of them they've seen. But the compiler creates a very quick feedback loop that an agent uses to produce error-free code at compile time, not at runtime. Don't believe me? Try it yourself. The Python ML stack argument doesn't hold up either. Researchers don't want to program; they want to research. They're already using agents to write their code, and when that's the case, they don't care what language it's in. The ML ecosystem follows where reliable code gets built, and no amount of Python training data solves a runtime-error lottery. If you disagree with this, give me an argument against Rust. I say this as someone with decades of Python development. I wish it weren't so, but I can't find a compelling reason not to branch out into Rust, and the timing with agent coding seems right.
-
Habemus smooth in Python! For those not familiar with it, smooth is an implementation of Single Source of Error (SSOE) time series forecasting models (ETS, ARIMA, and many more) by Ivan Svetunkov. I first came across the package while checking the benchmarks for the M5 competition (https://lnkd.in/eKYD7hMG), and then again at work, where it was the preferred forecasting package for an old yet durable project. Contrary to most forecasting libraries, which prioritise ease of use, smooth prioritises flexibility and control, making it possible to take full advantage of the models' capabilities. Needless to say, I like it! Smooth was developed in R, with a lot of C++ code doing the heavy lifting. Within the forecasting community, "smooth in Python" became the equivalent of "play Freebird!" (looking at you, Nicolas Vandeput). So at some point three to four years ago I texted Ivan and told him that I'd help him port it to Python. With the typical confidence that comes with ignorance, I thought it would be a piece of cake! After sorting out all the plumbing between C++ and Python, we hit a big wall: tens of thousands of lines of complex R code that had to be translated into Python. Luckily we had an ace up our sleeve, Filotas Theodosiou. While most programmers, including me, were arguing about the effectiveness of coding with AI, Filotas was already ahead of the curve and was using his great powers to demolish that wall. Turns out you need all the AI help you can get to produce a fraction of the output of a young Ivan, up in Lancaster doing his PhD. Long story short, a few years later, we have the first Python release of smooth on PyPI (https://lnkd.in/eKmztdih). We still have a lot of work to do to get to full feature parity with the R version, but we're getting there.
For more details and some benchmarks, check the post on Ivan's blog -> https://lnkd.in/ePDy9G8F PS: Special thanks to Ralph Urlus for developing and maintaining CARMA (https://lnkd.in/eMx3d7ns) which was a drop-in replacement for RcppArmadillo and made the whole thing possible!
-
Python is often praised for its simplicity and developer productivity, but what makes it particularly interesting is how much of its core actually runs on C through the CPython implementation. That design introduces a set of tradeoffs that are easy to overlook but important to understand. At a high level, Python gives you a clean, expressive syntax while delegating heavy lifting—such as memory management, object handling, and built-in data structures—to optimized C code. This is why operations on lists, dictionaries, and built-in functions often perform much better than equivalent logic written in pure Python loops. However, this abstraction comes at a cost. When you write Python code, especially iterative or CPU-bound logic, it is still interpreted and does not benefit from the same level of optimization as compiled C code. This creates a noticeable gap between “Python-level” performance and “C-backed” operations within the same program. The Global Interpreter Lock (GIL) is another direct consequence of this design. It simplifies memory management and ensures thread safety within CPython, but it also prevents true parallel execution of CPU-bound threads. As a result, developers often have to rely on multiprocessing or external libraries to fully utilize multi-core systems. On the positive side, Python’s tight integration with C makes it highly extensible. Performance-critical components can be offloaded to C or leveraged through libraries like NumPy and Pandas, which internally use optimized native code. In practice, many high-performance Python applications are structured as orchestration layers in Python, with execution-intensive parts handled elsewhere. The key takeaway is that Python is not inherently “slow” or “fast”—it depends on where and how the work is being done. Understanding the boundary between Python and its underlying C implementation allows you to make better architectural decisions, balancing readability, maintainability, and performance.
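The gap between "Python-level" loops and "C-backed" operations described above can be sketched like this (the function name and data size are illustrative):

```python
# Sketch: the same reduction written as a pure-Python loop and via the
# C-implemented built-in sum(). Results are identical; sum() runs its loop in C.
data = list(range(1_000_000))

def loop_sum(values):
    total = 0
    for v in values:  # each iteration executes interpreted Python bytecode
        total += v
    return total

# Both compute the same value, but sum() is typically much faster because the
# iteration happens inside optimized C code rather than the interpreter loop.
assert loop_sum(data) == sum(data)
print(sum(data))  # 499999500000
```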
-
If you try to scale concurrency in Python like Go, your system will slow down before it scales. This isn't about which language is better. It's about how each language was designed to handle concurrency. And that difference shows up the moment your backend starts handling real traffic.
Let's start with Python. Python supports concurrency through:
- threads
- async (asyncio)
But there's a fundamental limitation: the Global Interpreter Lock (GIL). The GIL allows only one thread to execute Python bytecode at a time. So even if you create multiple threads, they don't truly run in parallel (for CPU work); they take turns executing. This makes concurrency in Python:
- harder to scale for CPU-heavy tasks
- dependent on workarounds like multiprocessing
- more complex to reason about in real systems
Go was built with concurrency at its core. Instead of threads, it uses goroutines: lightweight, cheap to create, and managed by the Go runtime. You can run thousands, even millions, of concurrent tasks without worrying about system overhead. With Go:
- concurrent code looks like normal code
- channels make communication explicit
- timeouts and cancellations are built-in patterns
- concurrency is easier to reason about at scale
Then comes the scheduler. Go uses an M:N scheduler: many goroutines mapped to a few OS threads. This allows Go to utilize multiple CPU cores efficiently, switch tasks quickly, and handle high-load systems predictably. Python, because of the GIL, doesn't achieve this without spawning multiple processes.
Go makes it easier to build high-concurrency APIs, scalable backend systems, and predictable distributed services. Python excels at rapid development, AI/ML workloads, and flexibility. #Golang #Python #BackendDevelopment #SystemDesign #Concurrency #SoftwareEngineering #DistributedSystems
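A minimal asyncio sketch of Python's cooperative model mentioned above (the coroutine names and delays are illustrative; asyncio.sleep stands in for a real network call):

```python
# Sketch: asyncio interleaves IO-style waits on a single thread, so two
# "requests" that each wait 0.1s finish in roughly 0.1s total, not 0.2s.
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)  # stands in for a network or disk wait
    return f"{name} done"

async def main():
    # gather() runs both coroutines concurrently and preserves argument order.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

This helps with IO-bound concurrency; for CPU-bound work the GIL limitation described above still applies.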
-
Multiprocessing in Python by Harvard. Register for the Quantitative Finance Cohort to learn in-depth Quantitative Finance, enquire now: https://lnkd.in/g9f3cm8N
Python's standard interpreter, CPython, has a well-known limitation called the Global Interpreter Lock (GIL). The GIL ensures that only one thread executes Python bytecode at a time within a single process. While this simplifies memory management, it limits the effectiveness of multithreading for CPU-bound workloads. This is where multiprocessing becomes important. The Python multiprocessing module allows programs to create multiple independent processes, each with its own Python interpreter and memory space. Because each process has its own GIL, true parallel execution across multiple CPU cores becomes possible. In practice, multiprocessing is useful for CPU-intensive tasks such as numerical simulations, Monte Carlo methods, data processing pipelines, or large-scale backtesting frameworks. By distributing work across multiple cores, overall execution time can be significantly reduced. A common abstraction in multiprocessing is the process pool. Using objects such as Pool, a developer can distribute a function across many input values and let the operating system schedule execution across available cores. This makes parallelization relatively straightforward without manually managing each process. However, multiprocessing introduces trade-offs. Since processes do not share memory by default, communication must happen through mechanisms such as queues, pipes, or shared-memory objects. This can introduce overhead, particularly when transferring large datasets between processes. Another practical consideration is process start-up cost. Creating processes is heavier than creating threads, so multiprocessing tends to work best for large tasks with meaningful computation time, rather than very small tasks.
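A minimal sketch of the process-pool pattern described above (the worker function and pool size are illustrative):

```python
# Sketch: Pool.map distributes a function across worker processes, each with
# its own interpreter and its own GIL, so the calls run truly in parallel.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    # The __main__ guard is required on platforms that spawn workers by
    # re-importing this module (Windows, and macOS by default).
    with Pool(processes=4) as pool:
        # The pool splits the inputs across workers and reassembles the results
        # in input order.
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note the trade-off mentioned above: each input and result is pickled and sent between processes, so this pays off only when the per-task computation outweighs that overhead.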
-
🚀 Python’s Concurrency Era Is Changing — Are You Ready?
For decades, the Global Interpreter Lock (GIL) has been one of Python’s most debated design choices. In Python 3.12, the GIL is still very much part of the runtime. But Python 3.13 introduces something that could reshape how we think about Python performance: an *optional* GIL-free experiment. Let that sink in. This isn’t just a version upgrade; it’s a philosophical shift.
🔍 What’s actually happening?
Python 3.12: continues with the traditional GIL model — predictable, stable, and battle-tested.
Python 3.13: introduces an experimental no-GIL build, allowing true parallel execution of threads.
💡 Why this matters
For years, Python developers have worked around the GIL using multiprocessing, async programming, or offloading to C extensions. Now Python is exploring a future where those workarounds may not always be necessary.
⚖️ Pros of a GIL-free Python (3.13, experimental)
✅ True multithreading: CPU-bound tasks can finally run in parallel without jumping through hoops.
✅ Simpler mental model (in some cases): less need to decide between threads and processes for performance.
✅ Better hardware utilization: modern multi-core systems can be leveraged more effectively.
⚠️ Cons and trade-offs
❌ Performance overhead: removing the GIL introduces complexity, and single-threaded performance may take a hit.
❌ Ecosystem compatibility: many existing libraries assume the presence of the GIL; the transition won’t be instant.
❌ A new class of bugs: race conditions and synchronization issues will become more common for Python developers.
🧠 The bigger insight
This is not about “GIL = bad” or “no GIL = good.” It’s about *choice*. Python is evolving from a one-size-fits-all runtime into a more flexible platform that acknowledges diverse workloads, from scripting to high-performance computing.
📌 What should you do as a developer?
* Don’t rush to rewrite everything; the no-GIL build is still experimental.
* Start understanding concurrency deeply — the future will reward it.
* Keep an eye on library support and benchmarks before adopting.
The GIL debate isn’t ending; it’s entering its most interesting phase yet. #Python #SoftwareEngineering #Concurrency #TechTrends #Programming #Threading
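While keeping an eye on the ecosystem, one practical check is detecting at runtime whether you are on a free-threaded build. A sketch, assuming CPython (`sys._is_gil_enabled()` exists only on 3.13+, hence the fallback):

```python
# Sketch: detect whether this interpreter was built without the GIL, and
# whether the GIL is actually disabled right now.
import sys
import sysconfig

# Build-time flag: truthy on free-threaded builds, 0/None on standard builds.
gil_disabled_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Runtime check (3.13+): even on a no-GIL build, the GIL can be re-enabled,
# e.g. when an incompatible C extension is imported.
if hasattr(sys, "_is_gil_enabled"):
    gil_active = sys._is_gil_enabled()
else:
    gil_active = True  # pre-3.13 interpreters always have the GIL

print(f"free-threaded build: {gil_disabled_build}, GIL active now: {gil_active}")
```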
-
Wrote code in Sublime, just regular old Python autocomplete and syntax highlighting. Asked Claude to challenge ME to write some code (JS world ~10 years, not much Python since college). When I didn't know something, I looked at documentation. I'll tell ya what, my solution was 💩. But it worked. I had fun. I felt achievement. I was reminded that in Python, dict() on a list of 2-item lists pops them out as nice key-value pairs. Re-learned that line.split() works on any amount of whitespace in strings; nothing like that in JS. Learned list comprehension after I submitted my dooky solution. Practicing string manipulation and data structures feels important... How else do we learn to put better things in and get better things out? Maybe with future models it doesn't matter, and the AI is just better than you at everything from database design to dev ops. For now, I think there's still a reason to hone your craft, and a reason they have a bunch of PhDs building these models.
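A quick sketch of the three tricks mentioned above (the sample data is my own):

```python
# dict() on a list of 2-item lists turns the pairs into key-value entries.
pairs = [["a", 1], ["b", 2], ["c", 3]]
lookup = dict(pairs)
print(lookup)  # {'a': 1, 'b': 2, 'c': 3}

# str.split() with no argument splits on any run of whitespace and trims ends.
line = "  alpha\tbeta   gamma "
tokens = line.split()
print(tokens)  # ['alpha', 'beta', 'gamma']

# The loop-and-append version, rewritten as a list comprehension.
lengths = [len(t) for t in tokens]
print(lengths)  # [5, 4, 5]
```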
-
Why Python Had the Global Interpreter Lock (GIL) Initially
1. Reference-Counting Memory Management
Python uses reference counting to manage memory: every object tracks how many variables point to it. The problem was threads: without the GIL, you'd need a lock on every single object to protect its reference count, which would be enormously complex and slow. The GIL solves this with one simple lock instead of millions of per-object locks.
2. Historical Context: 1991
When Guido van Rossum created Python:
a. Multi-core CPUs didn't exist for consumers; single core was the norm.
b. Threading was rare and mainly used for I/O, not CPU parallelism.
c. The GIL had zero real cost in that environment.
d. Making CPython simple and portable was the priority.
3. It Made C Extensions Easy and Safe
Python was designed to be easily extensible with C. The GIL meant:
a. C extension authors didn't need to worry about thread safety.
b. A massive ecosystem of C extensions grew up assuming GIL protection.
Now Python has introduced free-threaded mode. What is free-threaded mode? Free-threaded mode (introduced experimentally in Python 3.13) is a build of CPython that removes the GIL, allowing multiple threads to run Python code truly in parallel across multiple CPU cores. It's also called "no-GIL" mode.
Caveats and Risks
a. Thread safety is now your job; you need locks/mutexes for shared data.
b. C extensions may not be compatible; many assume the GIL protects them.
c. Single-threaded code is roughly 5–10% slower due to fine-grained locking overhead.
d. Popular libraries (NumPy, etc.) are gradually adding free-threaded support.
Why It Matters
This is one of the biggest changes in Python's history. It opens the door to:
a. True parallel data processing.
b. Better performance on multi-core servers.
c. Python competing more seriously with Go/Rust for concurrent workloads.
Note: free-threaded mode is officially supported in Python 3.14, whereas the Python 3.13 no-GIL build is considered experimental.