GILs aren't just for fish. 🐟 Sometimes they're for snakes too. 🐍

Python's Global Interpreter Lock prevents parallelism. Python is slow. Python is single-threaded. So why does Python dominate big data?

🔻 Machine learning? PyTorch, TensorFlow (Python)
🔻 Data analysis? pandas, NumPy (Python)
🔻 Big data pipelines? PySpark, Dask (Python)

This makes no sense! 😶‍🌫️ Big data demands massive parallelism, yet Python's GIL prevents exactly that.

Here's the uncomfortable truth: Python doesn't process your data. NumPy, pandas, Polars, and PySpark do - and they don't have the GIL's limitations. When you write df.groupby('category').sum(), you're not running Python loops. You're calling optimized C/Rust code that releases the GIL and runs across all your CPU cores in parallel.

🗂️ What's inside:
🔹 How the GIL works (and why it exists)
🔹 The orchestration layer pattern
🔹 How NumPy, pandas, and Polars bypass the GIL
🔹 Fanout-on-write vs fanout-on-read strategies
🔹 When the GIL actually matters (and workarounds)
🔹 Python 3.13's experimental no-GIL mode

The pattern is simple: Python coordinates. C/Rust/JVM executes. This isn't a workaround, it's architectural brilliance.

📚 Read the full article: https://lnkd.in/gWRuqg74

❔ Have you encountered GIL-related performance issues in your Python projects? How did you solve them? ❔

#Python #DataEngineering #BigData #SoftwareEngineering #GIL #Performance #NumPy #Pandas #PySpark
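The idea above can be seen in miniature: a vectorized NumPy call hands the entire loop to compiled C (which can release the GIL while it runs), while the pure-Python version executes one bytecode step at a time under the lock. A minimal sketch (mine, not from the article):

```python
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# Pure-Python loop: every iteration runs interpreter bytecode under the GIL.
def python_sum(values):
    total = 0.0
    for v in values:
        total += v
    return total

# Vectorized call: the summation happens inside NumPy's compiled C loop,
# which is free to release the GIL while it crunches the array.
vectorized = data.sum()

# Same answer, radically different execution path.
assert python_sum(data) == vectorized == 499999500000.0
```

Timing the two (e.g. with `timeit`) shows the vectorized call running orders of magnitude faster, even though both lines are "Python".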
Python's GIL: How NumPy, pandas bypass parallelism limitations
-
Tips for Optimizing Performance

1. The moment you create a standard Python object (an instance of a class), Python reserves a dictionary (__dict__) to store its attributes. This is flexible, but very memory-inefficient. For data-heavy classes or dataclasses, defining __slots__ is non-negotiable for memory and access speed. Especially if your data is not too dynamic, use slots.

2. If you're inside a loop and using the + operator to build a long string iteratively, you are re-creating a new string object in memory on every single iteration. That's an O(n²) operation. The correct way is to use a list as a buffer and then join everything at the end: a single, optimized operation.

3. When you call a function inside a tight loop, say re.match or math.sqrt, the Python interpreter has to look up that function's name in the global namespace on every single loop iteration. You can bypass this lookup cost by binding the function to a local variable before the loop starts.

# Look up math.sqrt ONCE
local_sqrt = math.sqrt
for x in numbers:
    y = local_sqrt(x)  # <-- fast local variable access

4. When you use list.pop(0) to remove the first element, Python has to shift every other element in the list one position over in memory. This is an O(n) operation: it gets slower the larger your list is. collections.deque (a double-ended queue) is implemented specifically to handle O(1) appends and pops from both ends.

5. Memoization is the act of caching a function's result based on its arguments. If you call my_function() and then call it again with the same arguments, why should it re-calculate? The functools.lru_cache decorator handles this for you and is one of the single most effective performance hacks in Python. It's a literal one-line latency fix.

6. When you use an iterator/generator, you don't load the whole dataset into memory. You process one item, discard it, and only request the next item as needed. This applies to custom functions, too. If you can use yield instead of return in a function that builds a large list, you have converted an expensive memory operation into a cheap iterator. Prefer generators and iterators (using yield or for line in file:) over loading entire datasets into memory. It saves memory and often execution time.

7. A classic oversight for anyone doing significant text parsing (logs, web scraping, data cleaning): if you call re.match() or re.search() with a pattern string inside a loop, the re module pays the overhead of looking up (or rebuilding) the compiled pattern on every iteration. Compile it once, outside the loop.

# Compiling the pattern happens ONCE
PHONE_PATTERN = re.compile(r'\d{3}-\d{3}-\d{4}')
for text in all_documents:
    match = PHONE_PATTERN.search(text)

8. The C-style % formatting and str.format() involve more internal lookup and overhead. F-strings are the fastest way to format strings in modern Python.
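A few of the tips above in one runnable sketch (class and function names are my own, chosen for illustration): __slots__ from tip 1, the list-buffer join from tip 2, and lru_cache memoization from tip 5.

```python
from functools import lru_cache

# Tip 1: __slots__ removes the per-instance __dict__ entirely.
class PointSlots:
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y

# Slotted instances have no attribute dictionary to pay for.
assert not hasattr(PointSlots(1, 2), "__dict__")

# Tip 2: build strings with a list buffer + join, not += in a loop.
parts = []
for i in range(5):
    parts.append(str(i))
joined = ",".join(parts)  # one optimized concatenation at the end
assert joined == "0,1,2,3,4"

# Tip 5: lru_cache memoizes results keyed on the arguments.
calls = 0

@lru_cache(maxsize=None)
def square(n):
    global calls
    calls += 1
    return n * n

square(4); square(4); square(4)
assert calls == 1  # computed once, served from the cache afterwards
```

Each of these is a drop-in change: no algorithmic rewrite, just removing overhead the interpreter would otherwise pay on every object, iteration, or call.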
-
"Python Mini-Series Wrap-Up: What writing production-ready Python really looks like"

Over the last few posts, I shared a short Python mini-series focused on how Python is actually used in analytics and data engineering — beyond tutorials and toy examples.

The core idea across the series was simple: Python becomes valuable when it's structured, trusted, and built to scale.

Here's what I covered:
• Post 1 – Structure: Treat Python work like a pipeline, not a one-off notebook
• Post 2 – Unstructured data: Turning PDFs and messy text into structured datasets with regex
• Post 3 – Trust: Making data quality a first-class citizen through validation and checks
• Post 4 – Scale: Writing faster, more memory-efficient code with vectorization and smart data types
• Post 5 – Maturity: Early mistakes that taught me why reproducibility and structure matter

None of this is flashy — and that's the point. These are the habits that turn Python scripts into workflows teams can rely on, and analyses into outputs stakeholders actually trust.

If you're early in your data career, you don't need advanced tricks to stand out. Focus on writing Python that is:
✔ reproducible
✔ configurable
✔ readable by someone else
✔ safe to run more than once

That's what moves your work closer to production.

I'll be shifting next into SQL, using the same practical, real-world lens. 👉 Follow along — more coming soon.
-
Technical post: I spent a few hours this weekend porting tfx to Python 3.12.4. I've been on 3.10.14 for quite some time, and wondered how hard it would be. I worked with Gemini 3 Pro to guide me.

Doing this required working with the following repos:
• ml metadata
• tfx-bsl
• tensorflow data validation
• tensorflow transform
• tensorflow model analysis
• tfx

It involved working with bazel, setup.py, pyproject.toml, pytest, and a lot of debugging build errors, but at this point I'm at 380 tests passing for components (3.10.14 has 385 passing tests for components).

What surprised me was tfx-bsl: specifically, the lengths they've gone to optimize performance. Beam is doing the distributed processing, but tfx-bsl is doing its best to avoid Python as much as possible, using C++ in pyarrow for performance gains. Having ported that, and gotten a better view of tfx-bsl, I ported some of the protobuf/serialization code to tfx-bsl coders, getting rid of lots and lots of Python. There's much more I could do, but some of it would be more invasive to the code base than I'd like.

The cool part is getting a better view of the data lifecycle. It starts out schemaless on disk, becomes NamedTuple objects after schema discovery, becomes RecordBatches from there, then serialized protocol buffers on disk. It wakes up again as RecordBatches, goes through processing, and sleeps again on disk. Then it wakes up as tensors ready for the GPU. The compiled languages are really doing the heavy lifting, and Python is just a decorative facade.

Which brings me to my earlier point. What does it mean to port? When we go from Python X to Python Y, what happens? Why is it so hard? The difficulty comes not just from porting the Python code, which is easy, but from porting the bindings to compiled languages: the binary shared libraries (the "bsl" in tfx-bsl), the protocol buffers, and all the "multi-language" stuff.

All this was out of reach like six months ago. Now, all of a sudden, it's possible, because these LLMs are paying dividends on the knowledge investments over the years. A pretty funny thing happened while doing this: something wasn't working, and Gemini suggested we "trick" the system into working with a pretty creative band-aid. Turns out humans still need to supply some judgment, for which I'm thankful.
-
DAY 13: Reverse Integer [LeetCode] 🫡

Problem: Given a signed 32-bit integer x, return x with its digits reversed. If reversing x causes the value to go outside the signed 32-bit integer range [-2^31, 2^31 - 1], then return 0. Assume the environment does not allow you to store 64-bit integers (signed or unsigned).
_________________________________________

Introduction
In this article, we will explore a Python function that reverses the digits of a signed 32-bit integer. If the reversed integer exceeds the limits of a signed 32-bit integer, the function will return 0. This is a common problem that helps us understand integer manipulation and control flow in Python.

Key Concepts
Before diving into the code, let's clarify a few key concepts:
🔹 Signed 32-bit integer: an integer that can represent values from -2,147,483,648 to 2,147,483,647.
🔹 Reversing digits: taking an integer and flipping its digits. For example, reversing 123 gives 321, and reversing -123 gives -321.
🔹 Integer overflow: occurs when a calculation produces a value outside the range that can be represented with a given number of bits.

Code: in the attached PDF.

Code Structure
🔹 The code is structured within a class called Solution, which contains a method ReverseInt. This method takes an integer as input and processes it to return the reversed integer, or 0 if it exceeds the 32-bit integer range.

Breakdown of the Code:
1. Constants: X_MIN and X_MAX define the limits for a signed 32-bit integer.
2. Sign handling: the sign of the input integer is determined, and its absolute value is stored.
3. Reversal logic: a while loop extracts digits from the integer and builds the reversed integer.
4. Overflow check: before adding a new digit, the code checks whether the new value would exceed the limits.

Loop Steps Explained (for x = -123):
🔶 Initialization: the absolute value of the input number is stored in ab_s, and reversed_int is initialized to 0.
First iteration:
▪️ digit = ab_s % 10 gives the last digit (3).
▪️ ab_s //= 10 updates ab_s to 12.
▪️ reversed_int becomes 0 * 10 + 3, which is 3.
Second iteration:
▪️ digit = ab_s % 10 gives 2.
▪️ ab_s //= 10 updates ab_s to 1.
▪️ reversed_int becomes 3 * 10 + 2, which is 32.
Third iteration:
▪️ digit = ab_s % 10 gives 1.
▪️ ab_s //= 10 updates ab_s to 0.
▪️ reversed_int becomes 32 * 10 + 1, which is 321.
Exit loop: the loop exits as ab_s is now 0, and the function returns -321 (the original sign is applied).

Conclusion
The ReverseInt function effectively reverses the digits of a signed 32-bit integer while handling potential overflow. This exercise not only demonstrates basic arithmetic operations but also emphasizes the importance of checking for overflow in programming. Feel free to experiment with different integers to see how the function behaves! Happy coding!
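Since the code itself lives in the attached PDF, here is a sketch reconstructed from the description above. The names Solution, ReverseInt, X_MIN, and X_MAX come from the post; the body is my interpretation of the four steps, not the author's exact code.

```python
class Solution:
    # Step 1: constants for the signed 32-bit range.
    X_MIN = -2**31       # -2,147,483,648
    X_MAX = 2**31 - 1    #  2,147,483,647

    def ReverseInt(self, x: int) -> int:
        # Step 2: record the sign, work on the absolute value.
        sign = -1 if x < 0 else 1
        ab_s = abs(x)
        reversed_int = 0
        # Step 3: peel off digits and rebuild in reverse order.
        while ab_s:
            digit = ab_s % 10
            ab_s //= 10
            # Step 4: overflow check BEFORE the multiply-add.
            if reversed_int > (self.X_MAX - digit) // 10:
                return 0
            reversed_int = reversed_int * 10 + digit
        return sign * reversed_int
```

For example, Solution().ReverseInt(-123) returns -321, while Solution().ReverseInt(1534236469) returns 0 because 9646324351 overflows the 32-bit range.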
-
Python with Machine Learning — Chapter 9

📘 Topic: Python Class

🔍 Today, we're diving into a core concept: the Python Class. Think of a class as a blueprint for creating objects. It helps us organize our code in a clean, reusable way—like a recipe for making cookies! 🍪

**Why it matters in real-world learning:**
In machine learning and data science, classes help us structure complex models and data pipelines. They make our code modular and easier to debug. Learning this now builds a strong foundation for advanced topics later. You've got this! 💪

**Constructor: Your Object's First Step**
A constructor is a special method inside a class that runs automatically when you create a new object. Its job is to set up the object's initial state—like adding ingredients when you bake a cookie. In Python, the constructor is always named `__init__`.

Let's see a simple example:

[CODE]
class Cookie:
    def __init__(self, flavor, color):
        self.flavor = flavor  # Attribute set by constructor
        self.color = color
        print(f"A new {self.color} {self.flavor} cookie is ready!")

# Create a cookie object
choco_cookie = Cookie("chocolate", "brown")
[/CODE]

Here, `__init__` takes parameters `flavor` and `color` and assigns them to the object's attributes using `self`. When we create `choco_cookie`, the constructor runs and prints a welcome message.

Key takeaway: Every class can have one `__init__` constructor to initialize objects. It's your go-to tool for setting up data.

Practice this in your code! Try creating your own class. Share your thoughts or questions below—I'm here to guide you. 🚀

#Python #MachineLearning #Beginners #Coding
-
Today’s Python focus was 𝗗𝗶𝗰𝘁𝗶𝗼𝗻𝗮𝗿𝗶𝗲𝘀 and 𝗧𝘂𝗽𝗹𝗲𝘀. I spent time understanding how Python handles structured data using key-value pairs and fixed collections, and how this differs from lists.

𝗪𝗵𝗮𝘁 𝗜 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝗱 𝘁𝗼𝗱𝗮𝘆:
• Creating dictionaries to store related data using meaningful keys
• Accessing values using keys and using get() to avoid runtime errors
• Updating existing values and adding new key-value pairs
• Deleting entries and checking for key existence
• Iterating through dictionaries using keys and items()
• Extracting only keys and only values when needed
• Working with nested dictionaries to represent structured data
• Iterating through nested dictionaries for multi-level data
• Using dictionaries to model real examples like contact details and revenue by region

𝗞𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀:
• Dictionaries store data as key-value pairs, making lookups fast and clear
• Dictionaries are mutable, so values can be updated without recreating the structure
• get() is safer than direct key access when keys may not exist
• Nested dictionaries are useful for representing hierarchical data
• Iterating through dictionaries helps process structured datasets efficiently

I also revisited 𝘁𝘂𝗽𝗹𝗲𝘀 conceptually and understood where they fit:
• Tuples are ordered and immutable
• They are useful when data should not change
• Often used for fixed records, configuration values, or safe data grouping

Working with dictionaries made it clear how real-world data like contacts, configurations, and reports are represented in Python.

If you are learning Python as well, which data structure are you currently focusing on?

#Python #PythonLearning #DictionariesInPython #TuplesInPython #ProgrammingBasics #LearningInPublic #DataAnalytics #Upskilling
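A few of the operations above in one runnable sketch (the contact-book data is made up for illustration):

```python
# Contact details modeled as a nested dictionary.
contacts = {
    "alice": {"phone": "555-0101", "city": "Austin"},
    "bob": {"phone": "555-0102", "city": "Boston"},
}

# Safe access: get() returns a default instead of raising KeyError.
missing = contacts.get("carol", {}).get("phone", "unknown")
assert missing == "unknown"

# Updating an existing value and adding a new key-value pair.
contacts["alice"]["city"] = "Dallas"
contacts["alice"]["email"] = "alice@example.com"

# Iterating over a nested dictionary with items().
cities = {name: info["city"] for name, info in contacts.items()}
assert cities == {"alice": "Dallas", "bob": "Boston"}

# Tuples: ordered, immutable records that should not change.
point = (3, 4)
assert point[0] == 3  # point[0] = 5 would raise TypeError
```

Chaining get() with an empty-dict default, as above, is a common way to reach into nested structures without a try/except.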
-
Python Fundamentals Part 5: Dictionaries

Just published: Building a movie showtime lookup system!

What you'll learn:
• Dictionaries (key-value pairs)
• The .get() method explained
• Safe lookups without errors
• When to use dictionaries vs lists

Read here: https://lnkd.in/gnby3V9G

Part of my Python basics revision journey. Perfect for beginners learning data structures!

#Python #Programming #LearnToCode #DataStructure
-
🚨 Most Python performance bugs start with ONE mistake… strings.

Python strings aren’t “just text”. They’re immutable, Unicode-first, and performance-critical — and treating them casually can quietly tank your app ⚠️

Here’s the architect’s view of Python strings 👇

🔹 Immutability = reliability
Thread-safe, hashable, and memory-efficient by design

🔹 Indexing & slicing = precision tools
Zero-based, negative indexing, safe slicing (no crashes)

🔹 ❌ + in loops = O(n²) trap
✅ Use list.append() + "".join() for linear performance

🔹 f-strings = modern default
Cleaner, faster, safer than % or .format()

🔹 .translate() & .casefold() = pro-level cleaning
Built for real-world data, not toy examples

🔹 Interning & Unicode normalization = scale readiness
Pointer comparisons + global text consistency

If your system touches APIs, logs, CSVs, NLP, or user input,
👉 string mastery is non-negotiable 💡

🔥 Hot take: If your service slows down over time, check your string concatenation first.

📖 For a deep architectural breakdown with examples and benchmarks, check the full article below 👇
https://lnkd.in/gzSryfu4

#Python 🐍 #BackendDevelopment ⚙️ #FastAPI 🚀 #SystemDesign 🧠 #SoftwareEngineering 💻 #PerformanceOptimization ⚡ #CleanCode ✨ #Unicode 🌍 #DeveloperGrowth 📈 #PythonTips 🔍 #TechCareers 👨💻👩💻
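A small sketch of the .translate() and .casefold() cleaning mentioned above (the sample strings and the clean() helper are my own, for illustration):

```python
# casefold(): aggressive, Unicode-aware lowercasing for comparisons.
# It handles cases plain lower() misses, e.g. German ß -> "ss".
assert "Straße".casefold() == "strasse"

# translate(): map or delete many characters in one C-level pass.
# str.maketrans's third argument lists characters to delete.
strip_punct = str.maketrans("", "", ".,!?")
assert "Hello, world!".translate(strip_punct) == "Hello world"

# Combined: normalize text before matching user input.
def clean(text: str) -> str:
    return text.translate(strip_punct).casefold()

assert clean("HELLO, World!") == clean("hello world")
```

Both methods run in a single pass over the string in C, which is why they beat chains of replace() calls or per-character loops on real-world data volumes.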
-
The Hidden Cost of Python Dictionaries (And 3 Safer Alternatives) This article compares and contrasts: dictionaries, named tuples, dataclasses and Pydantic. #python #pythonhowtos #programming #pydantic #pythondataclasses #pythondicts https://lnkd.in/e2NqHfCB
-
This post explains something many people miss: Python isn’t meant to do the heavy lifting — it’s meant to orchestrate it. In real-world systems, Python acts as the glue that connects high-performance engines written in C/C++, Rust, and CUDA. From NumPy and PyTorch to FastAPI and modern validation frameworks, the actual compute runs where it’s fastest. Calling Python “slow” without understanding where execution happens is like blaming the steering wheel for a car’s speed. If you’re building production-grade systems, this distinction is critical. Performance is about architecture, not just language choice. 💡 Great breakdown — worth the read and discussion. #Python #SoftwareArchitecture #AIEngineering #MachineLearning #BackendDevelopment #TechInsights
Chief Executive Officer @ Convolity AI | Developing Explainable AI Solutions | CTO @ Biotech Wallah Pvt. Ltd. Developing Early Alzheimer’s Detection | B2B
“Python is slow.”

That statement is both true… and completely misleading. Let me explain with a real-world example 👇

When people say Python is slow, they’re usually thinking about CPU-heavy loops written in pure Python. Yes, if you try to run millions of mathematical operations inside a Python for loop, it will be slower than C or Rust. But that’s not how Python is used in real systems.

In production, Python rarely does the heavy lifting. Python is the orchestrator.

For example:
• When you use NumPy, the math runs in C / Fortran
• When you train models in PyTorch or TensorFlow, the compute runs in C++ + CUDA on GPUs
• When you build APIs with FastAPI, the event loop is C-based
• Even Pydantic v2 moved its core validation to Rust

So what does Python actually do? It coordinates. It routes data. It glues high-performance systems together.

Think of it like this:
Python is the steering wheel. C / C++ / Rust / GPU is the engine. We don’t blame the steering wheel for the car’s top speed.

The real takeaway: Python is slow only when misused. Python-based systems are fast by design. That’s why companies like Netflix, Google, Meta, and OpenAI run massive systems powered by Python.

The question isn’t “Is Python slow?” The real question is “Where does the work actually execute?”

If you’re building serious systems, this distinction matters. What’s been your experience using Python in production?
-