'Long live the GIL, you will be missed'

The option to disable the GIL in Python 3.14 is a potential game-changer for performance. The officially supported free-threaded build (`python3.14t`) runs without the Global Interpreter Lock; on that build you can toggle it explicitly with `PYTHON_GIL=0` or `-X gil=0`. (The `-X nogil` flag from early prototypes is not the official mechanism.)

🔒 With the GIL
Think of the GIL as a master key for your program. Only one thread can hold the key to execute Python code at any given time. That creates a bottleneck for CPU-bound tasks, even on multi-core processors.

🚀 Without the GIL
The master key is gone. Multiple threads can execute Python code on separate CPU cores simultaneously, giving true parallelism and significant speedups for CPU-bound code.

The big catch: race conditions
The GIL inadvertently protected us from many concurrency bugs. Without it, we are fully responsible for thread safety.

This code is NOT safe in no-GIL mode:

```python
import threading

n = 0  # shared counter

def increment():
    global n
    # This looks like one step, but it's three:
    # 1. Read the value of n
    # 2. Add 1 to the value
    # 3. Write the new value back to n
    n += 1

# Two threads racing through increment() can interleave these
# steps, so updates get lost and the final count comes up short.
```

✅ Fix: use a Lock
Explicitly protect shared data with a `threading.Lock` so the update is atomic, meaning uninterruptible:

```python
import threading

n = 0
lock = threading.Lock()  # lock protecting the shared counter

def safe_increment():
    global n
    with lock:  # only one thread can be in this block at a time
        n += 1
```

#Python #GIL #Performance #Concurrency
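As a sanity check, the lock-based fix from the post can be exercised under real contention. A minimal sketch (the thread count and iteration count are arbitrary choices for the demo): with four threads sharing one `Lock`, the final count comes out exact, with or without the GIL.

```python
import threading

n = 0
lock = threading.Lock()

def safe_increment(count):
    global n
    for _ in range(count):
        with lock:          # serializes the read-modify-write on n
            n += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n)  # 40000: no updates lost
```

Remove the `with lock:` line on a free-threaded build and the printed total will usually fall short of 40000, which is exactly the lost-update race the post describes.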
"Disabling GIL in Python 3.14: A Double-Edged Sword"
𝗣𝘆𝘁𝗵𝗼𝗻 𝗰𝗮𝗻 𝗳𝗶𝗻𝗮𝗹𝗹𝘆 𝘂𝘀𝗲 𝗮𝗹𝗹 𝘆𝗼𝘂𝗿 𝗖𝗣𝗨 𝗰𝗼𝗿𝗲𝘀.

For years, the Global Interpreter Lock (𝗚𝗜𝗟) was Python's biggest limitation for CPU-bound tasks. Even with 8 cores, only one thread could truly execute Python code at a time; the others waited their turn.

With 𝗣𝘆𝘁𝗵𝗼𝗻 𝟯.𝟭𝟰, that changes. The free-threaded (no-GIL) interpreter finally lets multiple threads run Python code simultaneously, even for CPU-heavy workloads. So what actually changed inside 𝗖𝗣𝘆𝘁𝗵𝗼𝗻 to make this possible?

𝗢𝗯𝗷𝗲𝗰𝘁 𝗹𝗶𝗳𝗲𝘁𝗶𝗺𝗲:
• 𝗢𝗹𝗱: One thread at a time, so refcounts were safe by default.
• 𝗡𝗲𝘄: Refcount updates use atomic and biased reference counting, and common objects like None and True are immortal: no locking, no refcount traffic.

𝗟𝗼𝗰𝗸𝗶𝗻𝗴 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝘆:
• 𝗢𝗹𝗱: One giant GIL for everything.
• 𝗡𝗲𝘄: Many fine-grained locks. Each subsystem guards itself (type caches, allocators, GC), so threads finally run side by side.

𝗚𝗮𝗿𝗯𝗮𝗴𝗲 𝗰𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻:
• 𝗢𝗹𝗱: A GIL-protected generational collector.
• 𝗡𝗲𝘄: A non-generational collector that still briefly stops the world to scan for cycles, but is designed to keep those pauses short across threads.

𝗜𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁𝗲𝗿 𝘀𝘁𝗮𝘁𝗲:
• 𝗢𝗹𝗱: Shared global state (builtins, modules, caches) all tangled together.
• 𝗡𝗲𝘄: Each interpreter has isolated state. Subinterpreters can run truly in parallel.

𝗖 𝗲𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻𝘀:
• 𝗢𝗹𝗱: Every extension assumed the GIL existed.
• 𝗡𝗲𝘄: A free-threaded C API plus atomic helpers let extensions declare themselves thread-safe.

𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲:
• 𝗢𝗹𝗱: One running thread per process, no matter how many cores.
• 𝗡𝗲𝘄: Slightly slower single-threaded, but real parallel speedups for CPU-bound workloads.

Now, Python can finally breathe across all cores.

— 𝐏𝐲𝐂𝐨𝐝𝐞𝐓𝐞𝐜𝐡 #Python
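The immortalization mentioned above can be glimpsed from pure Python. Since CPython 3.12, singletons like None carry a huge sentinel refcount that is never truly updated, so threads can share them with no atomic refcount traffic. A small probe (the exact sentinel value is an implementation detail and varies by build):

```python
import sys

# On CPython 3.12+, None and True are "immortal": their refcount
# field holds a large sentinel rather than a live count, so the
# interpreter skips incref/decref work on them entirely.
print(sys.getrefcount(None))   # a very large number on 3.12+
print(sys.getrefcount(True))
```

On older interpreters you would instead see a modest, constantly changing count, which is precisely the cross-thread refcount traffic the free-threaded design avoids.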
I have rebuilt a core feature of my backtesting engine as a standalone Python script. It transforms raw market data from a broker into a cleaner dataset ready for backtesting and research.

Features:
1) Automated MT5 ingestion: fetches and processes 28 major FX pairs (customizable).
2) Exhaustive BFS triangulation: intelligently reconstructs missing prices from related pairs.
3) No lookahead bias: uses forward-fill (ffill) exclusively to bridge NaN timestamps. Any timestamp that cannot be filled remains NaN, so no future data leaks into the past.
4) Full provenance: every bar is tagged with its source (ORIGINAL, TRIANGULATED, FFILL).

The attached photo is the final output, visualizing the original market data versus what was filled via triangulation or forward-fill, or remained NaN.
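The no-lookahead forward-fill in point 3 can be sketched in plain Python, independent of any dataframe library. This is illustrative only: the function name is mine, and the NAN tag for unfillable bars is my own addition alongside the post's ORIGINAL/FFILL labels.

```python
import math

def ffill_with_provenance(series):
    """Forward-fill NaNs using only PAST values (no lookahead bias).

    Returns (filled, sources), where sources tags each bar as
    'ORIGINAL' or 'FFILL'; leading NaNs stay NaN and get tag 'NAN'.
    """
    filled, sources = [], []
    last = math.nan
    for v in series:
        if not math.isnan(v):
            filled.append(v)
            sources.append("ORIGINAL")
            last = v
        elif not math.isnan(last):
            filled.append(last)      # carry the last known value forward
            sources.append("FFILL")
        else:
            filled.append(math.nan)  # nothing in the past to fill from
            sources.append("NAN")
    return filled, sources

vals, tags = ffill_with_provenance([math.nan, 1.2, math.nan, 1.3])
print(vals)  # [nan, 1.2, 1.2, 1.3]
print(tags)  # ['NAN', 'ORIGINAL', 'FFILL', 'ORIGINAL']
```

Note the first bar stays NaN rather than being backfilled from 1.2; backfilling there would be exactly the lookahead bias the pipeline is built to prevent.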
Everyone's been posting about TOON as an alternative to JSON for LLM structured outputs. JSON is still better for one reason: most major LLM providers let you enforce structured outputs by simply passing a JSON schema with the API call. This removes the risk of formatting issues, allows complex output schemas without parsing errors, and doesn't require giving schema examples in the prompt, which would be impractical for complex schemas anyway. TOON may win on token efficiency, but it's nowhere near JSON in support and integration. Just like Julia is a faster language than Python, yet almost no one adopts it.
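For concreteness, here is the kind of JSON Schema you would hand to a provider's structured-output option. The schema fields are invented for illustration, and the exact API parameter that accepts the schema varies by provider; the point is that a conforming reply parses directly with the stdlib, no repair step needed.

```python
import json

# Hypothetical output schema for a sentiment task. Providers that
# support structured outputs constrain generation so the reply is
# guaranteed to match this shape.
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["pos", "neg", "neutral"]},
        "score": {"type": "number"},
    },
    "required": ["sentiment", "score"],
}

# A schema-conforming model reply parses in one line:
reply = '{"sentiment": "pos", "score": 0.93}'
data = json.loads(reply)
print(data["sentiment"], data["score"])  # pos 0.93
```

With TOON, by contrast, you need a third-party parser and prompt-level formatting examples, which is exactly the integration gap the post describes.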
🚀 𝐏𝐲𝐭𝐡𝐨𝐧 𝟑.𝟏𝟒 - 𝐓𝐡𝐞 𝐧𝐞𝐰 𝐅𝐫𝐞𝐞-𝐓𝐡𝐫𝐞𝐚𝐝𝐢𝐧𝐠 𝐦𝐨𝐝𝐞𝐥 𝐚𝐧𝐝 𝐰𝐡𝐲 𝐢𝐭 𝐦𝐚𝐭𝐭𝐞𝐫𝐬 🚀

If you've been writing Python for a while, you've probably bumped into the limitations of the Global Interpreter Lock (GIL). The GIL means that even on a multi-core machine, threads in one Python process can't execute Python bytecode truly in parallel. Only one thread runs at a time!

With Python 3.14, the "𝒇𝒓𝒆𝒆-𝒕𝒉𝒓𝒆𝒂𝒅𝒆𝒅" or "no-GIL" build is officially supported. That means you can opt into a version of CPython where the GIL is disabled and threads truly run in parallel across multiple CPU cores.

⚠️𝐖𝐡𝐚𝐭'𝐬 𝐭𝐡𝐞 𝐆𝐈𝐋?
In previous Python versions, the Global Interpreter Lock ensured only one thread could execute Python bytecode at a time, so even on multi-core hardware, threads couldn't fully run in parallel.

💡𝐖𝐡𝐚𝐭 𝐜𝐡𝐚𝐧𝐠𝐞𝐬 𝐰𝐢𝐭𝐡 𝐟𝐫𝐞𝐞-𝐭𝐡𝐫𝐞𝐚𝐝𝐢𝐧𝐠?
- Threads can now truly run in parallel on multiple cores when using a free-threaded build of Python (python3.14t).
- This opens up real gains for CPU-bound, multithreaded Python workloads.
- Existing Python libraries written in a thread-safe way should work without modification and utilize all CPU cores.
- Importing a C extension that has not been explicitly marked free-thread-safe re-enables the GIL for the lifetime of that process.

🔍𝐁𝐨𝐭𝐭𝐨𝐦 𝐥𝐢𝐧𝐞
If your Python apps care about multi-core performance or threading, this update is worth watching (or even experimenting with). It's a strong signal that Python is leveling up its concurrency game, making it easier for developers to build scalable, high-performance systems.

#Python #Python314 #Concurrency #Multithreading #GIL #SoftwareEngineering #DevCommunity
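If you want to check which build you are actually running on, CPython exposes a couple of introspection hooks. A small sketch: `sys._is_gil_enabled()` is a private helper added in 3.13, so the code guards for older versions, and `Py_GIL_DISABLED` is the build-config flag set for free-threaded builds.

```python
import sys
import sysconfig

# Was this interpreter compiled with free-threading support
# (a python3.14t-style build)?
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Is the GIL active right now? Even on a free-threaded build, an
# incompatible C extension may have re-enabled it at import time.
if hasattr(sys, "_is_gil_enabled"):
    gil_active = sys._is_gil_enabled()
else:
    gil_active = True  # pre-3.13 interpreters always run with the GIL

print(f"free-threaded build: {free_threaded_build}, GIL active: {gil_active}")
```

This distinction matters: "built free-threaded" and "currently running without the GIL" can differ within the same process, which is the extension caveat from the list above.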
⚡ Caught RCE in a "Python-y" HTTP service, escalated to root via a hidden endpoint. Here's the path I took.

What happened
- 🧩 Odd HTTP response hinted at Python exec on the backend
- 🔒 Minimal surface: only ports 22 and 80
- 🧪 Switched from poking to controlled Python payloads

Steps I took
- 🔎 Recon: nmap showed 22 and 80; the web app hinted at Python evaluation.
- 🐍 Code exec: Python payloads to list directories and read files.
- 🔐 Creds find: found /opt/dev/.git, pulled cached GitHub creds.
- 🖥️ Stable access: SSH with the working combo.
- ♻️ Rollback: git restore revealed older logic.
- 🕵️ Discovery: a switch-case with hidden "admin" and "shell" commands.
- 🎯 Endpoint fuzz: scripted tests → "admin" asked for a password.
- 🧰 Password fuzz: a rockyou slice found the valid admin password.
- 👑 Root path: "Welcome Admin!!!" → send "shell" → execute to finish.

Why it worked
- 🗃️ Version archaeology: old code told truths the current code hid.
- 🔎 Clues over guesswork: followed artifacts and protocol behavior.
- 🎯 Depth over breadth: two ports, go deep.

Takeaways
- 📂 Check .git artifacts for configs and deleted files.
- 🧵 Fuzz intent, not just paths: custom protocols hide admin flows.
- 🧪 Validate assumptions with small, targeted scripts.

Tools and snippets
- 🧪 nmap, Python socket fuzzers
- 📝 Python one-liners for enumeration and file reads
- 🔧 git restore to recover prior logic

Result
🚀 Odd HTTP behavior → code exec → creds → hidden endpoint → admin shell → root. Clean chain. Minimal noise.

What would you test next on a service that hints at Python eval? 💭 https://lnkd.in/eQWPuPg3
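A minimal version of the "fuzz intent, not just paths" idea: a tiny socket prober plus wordlist normalization. Everything here is a placeholder sketch (host, port, function names, and the wordlist are invented), not the actual target or tooling from the writeup.

```python
import socket

def probe(host, port, word, timeout=3.0):
    """Send one candidate command to the service and return its reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(word.encode() + b"\n")
        return s.recv(4096).decode(errors="replace")

def candidates(wordlist):
    """Normalize a wordlist slice into unique lowercase probe words."""
    seen, out = set(), []
    for w in wordlist:
        w = w.strip().lower()
        if w and w not in seen:
            seen.add(w)
            out.append(w)
    return out

if __name__ == "__main__":
    # Offline part of the workflow: prepare the probe words. Calling
    # probe("10.10.10.x", 80, word) against a live service is the
    # online part, omitted here.
    words = candidates(["Admin", "shell", "admin", "  status "])
    print(words)
```

Probing command words rather than URL paths is what surfaces switch-case-style hidden flows like the "admin"/"shell" handlers the post found.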
🚀 Python 3.14 is here, and it's packed with great upgrades!

Released in October 2025, this version brings some of the most exciting improvements 👇

• 🧠 Deferred annotations by default: no more `from __future__ import annotations` for forward-referencing type hints.
• 🧩 t-strings (t""): a new kind of string literal for safer and more flexible templating.
• 🖥️ Modern REPL: now with syntax highlighting, smarter autocomplete, and clearer error messages.
• 🧵 Multiple interpreters: via the new concurrent.interpreters module, for true isolation within a single process.
• ⚙️ Official "no-GIL" build: the free-threaded build that removes the Global Interpreter Lock for real multi-core parallelism is now officially supported.
• 🗜️ Also: Zstandard compression, UUID versions 6-8, optional parentheses when catching multiple exceptions, HMAC built on verified HACL* code, colored CLI output for stdlib modules, and an experimental JIT compiler for performance gains.

This release shows how far Python has come, from typing improvements to real concurrency and even JIT compilation. Exciting times ahead for developers and teams building modern apps in Python! 🐍✨

#Python #Python314 #Programming #SoftwareEngineering #Developers #DevOps #OpenSource #Microservices #TechUpdate
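The deferred-annotations change is the one you feel immediately: forward references in type hints just work. On older versions the same effect needs the future import, which this sketch uses so it runs anywhere; `Node` is a toy class for illustration.

```python
from __future__ import annotations  # the behavior Python 3.14 makes default

class Node:
    # 'Node' is a forward reference to the class still being defined.
    # With deferred annotations this is fine; with eager evaluation
    # (pre-3.14, without the future import) it raises NameError.
    def __init__(self, value: int, next: Node | None = None):
        self.value = value
        self.next = next

chain = Node(1, Node(2))
print(chain.next.value)  # 2
```

On 3.14 you can delete the future-import line and the class definition still works, because annotations are no longer evaluated at class-creation time.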
Hello Everyone 👋,

🤔 Ever wondered why Python has a Queue when we already have lists?

When handling tasks, messages, or data exchange between threads, it's tempting to just use a normal list. But that simple choice can lead to chaos: race conditions, data loss, or weird timing bugs.

Here's the truth 👇
🔹 List: fast and flexible, but compound operations aren't thread-safe, and there's no built-in way for a consumer to wait for new items.
🔹 Queue: built for safe, synchronized data sharing between threads.

With queue.Queue, you get:
✔️ Automatic locking and blocking
✔️ Thread-safe task handling
✔️ A smooth producer-consumer flow

So next time your threads need to share work, don't use a list. Use a Queue and let Python handle the hard part. 🧵

💬 Check out the attached docs for an example.

#contact: navinkpr2000@gmail.com
#Python #Multithreading #Queue #ThreadSafety #CodingTips #PythonDeveloper #Concurrency #AsyncProgramming #CodeBetter #Developers #crewxdev
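The producer-consumer flow above fits in a few lines. A minimal sketch using a sentinel value to signal shutdown (one common convention; `queue.shutdown()` in 3.13+ is an alternative):

```python
import queue
import threading

q = queue.Queue()      # thread-safe FIFO with built-in locking
results = []

def producer():
    for i in range(5):
        q.put(i)       # put() handles all locking internally
    q.put(None)        # sentinel: tells the consumer to stop

def consumer():
    while True:
        item = q.get() # blocks until an item is available
        if item is None:
            break
        results.append(item * 10)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 10, 20, 30, 40]
```

With a plain list, the consumer would have to busy-poll for new items and coordinate its own locking; `q.get()` gives you blocking and synchronization for free.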
𝐏𝐲𝐭𝐡𝐨𝐧 𝟑.𝟏𝟒 𝐢𝐬 𝐡𝐞𝐫𝐞! 𝟑 𝐦𝐚𝐣𝐨𝐫 𝐮𝐩𝐝𝐚𝐭𝐞𝐬 𝐲𝐨𝐮 𝐬𝐡𝐨𝐮𝐥𝐝 𝐤𝐧𝐨𝐰

Python 3.14 isn't just a minor update: it brings real changes for performance, safety, and everyday workflow 👇

1️⃣ 𝐅𝐢𝐧𝐚𝐥𝐥𝐲, 𝐓𝐫𝐮𝐞 𝐏𝐚𝐫𝐚𝐥𝐥𝐞𝐥𝐢𝐬𝐦 𝐁𝐞𝐲𝐨𝐧𝐝 𝐭𝐡𝐞 𝐆𝐈𝐋 (𝐏𝐄𝐏 𝟕𝟑𝟒)
For decades, the Global Interpreter Lock limited true multi-core concurrency. Now, the new concurrent.interpreters module lets you run multiple independent interpreters in one process, each with its own GIL. That means genuine multi-core parallelism for CPU-bound tasks, with less overhead than multiprocessing. It's not "GIL off" (that's the separate free-threaded build), but it's a huge leap forward.

2️⃣ 𝐒𝐦𝐚𝐫𝐭𝐞𝐫 𝐒𝐭𝐫𝐢𝐧𝐠𝐬 𝐰𝐢𝐭𝐡 𝐭-𝐬𝐭𝐫𝐢𝐧𝐠𝐬 (𝐏𝐄𝐏 𝟕𝟓𝟎)
A t-string like t"Hello, {name}!" doesn't return a plain string but a structured Template object. Libraries can inspect or transform each part before rendering: perfect for templating, localization, or escaping. F-strings stay; t-strings add new power.

3️⃣ 𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫 𝐄𝐱𝐩𝐞𝐫𝐢𝐞𝐧𝐜𝐞 𝐔𝐩𝐠𝐫𝐚𝐝𝐞𝐬
🖍️ The REPL now has syntax highlighting by default.
🧠 Error messages suggest fixes for typos (e.g., whille → while?).

A release that modernizes Python without losing its soul.

🔗 Source: https://lnkd.in/dYRWBe7z
Sesamo #Python #Python314 #Programming #SoftwareEngineering #DataScience #Developer #TechUpdate
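A minimal sketch of the PEP 734 API described in point 1. Since concurrent.interpreters only exists on Python 3.14+, the import is guarded so the snippet runs (and degrades gracefully) on older versions; the helper function is my own wrapper, not part of the stdlib.

```python
# Requires Python 3.14+ for the real behavior; guarded for older versions.
try:
    from concurrent import interpreters
except ImportError:
    interpreters = None

def run_in_subinterpreter(source):
    """Execute source code in a fresh interpreter with its own GIL."""
    if interpreters is None:
        return "concurrent.interpreters requires Python 3.14+"
    interp = interpreters.create()   # new isolated interpreter
    try:
        interp.exec(source)          # runs in that interpreter's state
        return "ok"
    finally:
        interp.close()               # tear the interpreter down

print(run_in_subinterpreter("x = sum(range(10))"))
```

Because each interpreter has its own GIL and isolated module state, several of these can execute CPU-bound code in parallel threads of one process, which is the multiprocessing-without-processes win the post describes.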