I wasted 3 hours thinking my code was broken. It wasn't. The GIL was just laughing at me.

I had a Python script doing heavy data processing. Added multi-threading to speed it up. Ran it. Expected magic. Got... the exact same speed. Maybe even slower.

I checked my logic. Checked my threads. Googled everything. Then finally someone in a forum said — "Bro, that's just the GIL."

So what even IS the GIL? In simple words: Python has a rule that only one thread can execute Python bytecode at a time, no matter how many cores your machine has. It's called the Global Interpreter Lock. It exists to keep Python's memory management safe, but it also means your threads aren't actually running in parallel when doing heavy computation.

I felt cheated, honestly. But here's what actually helped me:
→ Switched to multiprocessing: each process has its own interpreter and its own GIL. Problem solved.
→ For API calls and I/O stuff? asyncio is your best friend (the GIL is released while waiting on I/O anyway).
→ Libraries like NumPy release the GIL during heavy computation. Smart.
→ Python 3.13 ships an experimental free-threaded build that makes the GIL optional. The future looks good.

The GIL isn't evil. It's just something nobody tells you about when you're starting out — and you only discover it when you're sitting there confused at 2am wondering why your "optimized" code is still slow.

If this saves even one person that 3-hour spiral, this post was worth it.

Have you hit the GIL wall before? What did you do? Let me know below

#Python #PythonDeveloper #Programming #LearnPython #SoftwareEngineering #CodingLife #TechCommunity
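A minimal sketch of the multiprocessing fix described above. The function name and the work sizes are illustrative choices, not from my original script: the point is that CPU-bound work like this serializes on the GIL under threads, but runs in true parallel across processes.

```python
# Sketch: CPU-bound work distributed across processes, each with its own GIL.
from multiprocessing import Pool

def busy_sum(n):
    """Deliberately CPU-bound: a pure-Python loop that threads cannot parallelize."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # 4 worker processes; unlike 4 threads, these actually use 4 cores.
    with Pool(processes=4) as pool:
        results = pool.map(busy_sum, [10**5] * 4)
    print(results[0])
```

With `threading.Thread` instead of `Pool`, the same four calls would take roughly as long as running them one after another.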
Overcoming the Python GIL: A 3-Hour Lesson
Day 6 was the most hands-on day yet. I stopped looking at Python as a collection of rules and started using it as a high-powered filter for data. Here is how Day 6 changed my perspective on algorithms and strings:

🔹 The Accumulator Pattern: I learned how to make a loop remember things. Whether it's counting occurrences, summing values, or finding an average, it's all about maintaining state while the loop churns through data.

🔹 The Search Party: I built logic to find the largest and smallest values in a set. Finding the smallest is tricky—you have to be careful with how you initialize your variable, or your starting "zero" might accidentally become your answer.

🔹 Strings Are Collections: I used to think of a word as just "text." Now I see it as a sequence. I've learned to slice strings to grab exactly what I need, strip away the "noise" (whitespace), and parse specific data out of a messy block of text.

🔹 The "in" Operator: Python's readability shines here. Writing if 'search_term' in text: feels like English, but it's actually a powerful logical tool for filtering information instantly.

Next up: File Handling. I'm moving from typing data manually into the console to letting Python read and analyze entire documents for me. 📂

#Python #DataAnalysis #CodingJourney #BuildInPublic #SoftwareLogic #Algorithms #StringManipulation
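Here is a small sketch of the accumulator and "find the smallest" patterns above (the sample numbers are made up for illustration). Note the initialization trap: starting the minimum at 0 instead of "nothing seen yet" would wrongly return 0 for an all-positive list.

```python
# Accumulator + minimum-search patterns, with safe initialization.
def smallest(values):
    lowest = None                  # "nothing seen yet" — NOT 0
    for v in values:
        if lowest is None or v < lowest:
            lowest = v
    return lowest

total = 0                          # accumulator: the loop "remembers" a running sum
for n in [4, 9, 2, 7]:
    total += n

print(smallest([4, 9, 2, 7]))     # 2 — correct even though every value is positive
print(total)                      # 22
```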
🚀 Day 49 of #100DaysOfCode
LeetCode #83 – Remove Duplicates from Sorted List

Today I solved a classic Linked List problem:
👉 Given the head of a sorted linked list, delete all duplicates such that each element appears only once.
👉 Return the linked list, still sorted.

Since the list is already sorted, duplicates always appear next to each other — which makes the solution efficient and elegant.

💡 Key Insight: Instead of using extra space (like a set), we can simply:
- Traverse the linked list
- Compare the current node with the next node
- Skip the next node if it's a duplicate

🧠 Python Implementation:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def deleteDuplicates(head):
    current = head
    while current and current.next:
        if current.val == current.next.val:
            current.next = current.next.next  # skip the duplicate node
        else:
            current = current.next
    return head
```

✅ Why This Works:
- Time Complexity: O(n)
- Space Complexity: O(1) — no extra memory used
- Efficient because the list is already sorted

Problems like this strengthen understanding of:
✔️ Linked list traversal
✔️ Pointer manipulation
✔️ Writing clean and optimal code

Consistency in solving small problems builds strong fundamentals. 💪

#LeetCode #Python #DataStructures #LinkedList #ProblemSolving #CodingJourney
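A quick way to sanity-check the traversal on a sample list. `ListNode` and `deleteDuplicates` are repeated here so the snippet runs on its own; `build_list` and `to_list` are helper names I made up for the demo.

```python
# Self-contained check of deleteDuplicates on [1, 1, 2, 3, 3].
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def deleteDuplicates(head):
    current = head
    while current and current.next:
        if current.val == current.next.val:
            current.next = current.next.next
        else:
            current = current.next
    return head

def build_list(values):
    """Build a linked list from a Python list (front to back)."""
    head = None
    for v in reversed(values):
        head = ListNode(v, head)
    return head

def to_list(head):
    """Flatten a linked list back into a Python list."""
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

print(to_list(deleteDuplicates(build_list([1, 1, 2, 3, 3]))))  # [1, 2, 3]
```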
Day 3 of my Python journey, and I've officially hit my first "ego check."

Yesterday, I thought I had a simple Even/Odd script figured out. I was wrong. It turns out the computer doesn't care what I meant; it only cares what I wrote. ❌

Moving into Day 3, I've been studying the "Laws of the Python Universe" to stop guessing and start structuring:

🔹 The Hierarchy of Operations: I learned that 1 + 2**3 / 4 * 5 isn't just a string of numbers—it's a sequence. Python doesn't simply read left-to-right; it respects a hierarchy (PEMDAS). If you ignore the order of operations, your data is junk before you even hit 'Enter.'

🔹 The "Wait" State: I built a simple script to convert European floor numbers to US floors. It's a basic + 1 calculation, but it taught me about blocking calls—how the program literally pauses its entire existence at input() to wait for the user to provide data.

The biggest takeaway? Coding isn't about memorizing syntax; it's about debugging my own thought process. I'm learning that "clean code" starts with a "clear mind."

Day 4 is up next. Let's see if I can outsmart the interpreter tomorrow. 🐍

#Python #LearningToCode #BuildInPublic #SoftwareLogic #TechJourney #DataScience
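A sketch of both ideas above. The floor converter is my guess at the exercise (Europe's "1st floor" is the US "2nd floor", hence + 1); the function name is mine.

```python
# PEMDAS in action: ** binds first, then / and * left-to-right, then +.
result = 1 + 2**3 / 4 * 5
print(result)  # 11.0 — not what a naive left-to-right reading would give

def european_to_us_floor(eu_floor: int) -> int:
    """Convert a European floor number to its US equivalent."""
    return eu_floor + 1

# In the real script this value would come from a blocking input() call:
#   eu_floor = int(input("European floor: "))
print(european_to_us_floor(3))  # 4
```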
Day 18: Scope and Precision — The Limits of Logic 🌐

As your programs grow, you'll start having variables with the same names in different places. How does Python know which one to use? And when doing math, how many decimals can Python actually "remember"?

1. Local vs. Global Scope
Think of scope as the "area of visibility" for a variable.
- Global scope: variables defined at the top level (outside any function). They can be read from anywhere in your script.
- Local scope: variables defined inside a function. They only exist while that function is running; once the function returns, the variable is gone.

💡 The Engineering Lens: Avoid relying on global variables. If every function can change a variable, bugs become a nightmare to track down. Keep data local whenever possible!

2. The LEGB Rule: Python's Search Engine
When you reference a name, Python searches in a very specific order to resolve it. This is the LEGB rule:
- Local: inside the current function.
- Enclosing: inside any nested "parent" functions.
- Global: at the top level of the file.
- Built-in: Python's pre-installed names (like len or print).

3. Precision: The Decimal Limit
When you use a float (a decimal number), Python has to fit that number into a fixed amount of memory.
- Maximum precision: Python floats are "double-precision" (64-bit) values. This means they can hold about 15 to 17 significant decimal digits.
- The default: when you perform a calculation, Python shows as many decimals as needed to represent the value, but it stops being accurate past that 15–17 digit mark.

💡 The Engineering Lens: Because of this limit, 0.1 + 0.2 evaluates to 0.30000000000000004. If you are building a banking app or a scientific tool that needs exact decimal arithmetic, don't use floats; use Python's decimal module instead.

#Python #SoftwareEngineering #CleanCode #ProgrammingTips #DataPrecision #LearnToCode #TechCommunity #PythonDev
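Both ideas fit in one small sketch (the `outer`/`inner` names are illustrative): LEGB picks the innermost binding of `x`, and `Decimal` avoids the float rounding shown above.

```python
# LEGB lookup plus the float-precision gotcha and its decimal-module fix.
from decimal import Decimal

x = "global"            # Global scope

def outer():
    x = "enclosing"     # Enclosing scope for inner()
    def inner():
        x = "local"     # Local wins: L is checked before E, G, and B
        return x
    return inner()

print(outer())                          # local
print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 — exact decimal arithmetic
```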
🚀 Day 12 of #75DaysofLeetcode
LeetCode #11 – Container With Most Water (Medium)

Today I solved the classic two-pointer problem: Container With Most Water.

🔎 Problem Summary: Given an array height, each element represents a vertical line. The goal is to choose two lines such that, together with the x-axis, they form a container that holds the maximum amount of water.

💡 Key Insight: Instead of checking all pairs (O(n²)), we can use the Two Pointer Technique:
✔ Start with one pointer at the beginning and one at the end
✔ Calculate the container area using area = min(height[left], height[right]) * (right - left)
✔ Move the pointer pointing to the smaller height
✔ Keep track of the maximum area

⚡ Optimal Complexity:
⏱ Time Complexity: O(n)
📦 Space Complexity: O(1)

💻 Python Implementation:

```python
from typing import List

class Solution:
    def maxArea(self, height: List[int]) -> int:
        left, right = 0, len(height) - 1
        max_water = 0
        while left < right:
            area = min(height[left], height[right]) * (right - left)
            max_water = max(max_water, area)
            if height[left] < height[right]:
                left += 1
            else:
                right -= 1
        return max_water
```

📚 This problem reinforced how two pointers can reduce a brute-force O(n²) scan to O(n). Consistency in solving problems daily is slowly improving my problem-solving skills and algorithmic thinking. 💪

#LeetCode #DSA #Python #TwoPointers #ProblemSolving #CodingPractice #100DaysOfCode
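Checking the solution against LeetCode's standard sample input for this problem (the `Solution` class is repeated so the snippet runs on its own):

```python
# Two-pointer maxArea checked on the classic sample input.
from typing import List

class Solution:
    def maxArea(self, height: List[int]) -> int:
        left, right = 0, len(height) - 1
        max_water = 0
        while left < right:
            area = min(height[left], height[right]) * (right - left)
            max_water = max(max_water, area)
            # Moving the taller side can only shrink the area, so move the shorter one.
            if height[left] < height[right]:
                left += 1
            else:
                right -= 1
        return max_water

print(Solution().maxArea([1, 8, 6, 2, 5, 4, 8, 3, 7]))  # 49
```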
Why x in set feels instant… but x in list doesn't 👇

Two lines that look the same:
---> x in my_list
---> x in my_set

Both answer a simple question: "Is x present?" But their time complexity is very different.

🧾 Lists → linear search (O(n))
A list is like checking a notebook: you start at page 1… then page 2… then page 3… Worst case, you check every element.

nums = [3, 8, 2, 10, 7]
7 in nums  # Python checks one-by-one

So membership in a list is O(n) — time grows with size.

⚡ Sets → hashing (average O(1))
A set is more like a locker system. Instead of scanning all elements, Python:
- Computes a hash of the value
- Jumps directly to its bucket

nums = {3, 8, 2, 10, 7}
7 in nums  # direct lookup

No scanning. Just direct access → average O(1).

Important detail: set lookup is O(1) on average, not magic. If many values collide into the same bucket (rare but possible), it can degrade — but Python handles hashing well, so in practice it stays near constant time.

Practical takeaway:
Use a list when:
- order matters
- duplicates matter
- the dataset is small

Use a set when:
- you need fast membership checks
- uniqueness matters
- you're solving frequency / existence problems

What small Python detail saved you the most debugging time?

#Python #DataStructures #Algorithms #DSA #CodingTips #LearningInPublic #YogeshLearns
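A rough benchmark sketch of the difference described above. The collection size, repeat count, and the probe value are arbitrary choices of mine; `-1` is deliberately absent so the list search hits its worst case and scans everything.

```python
# Timing worst-case membership: list scan vs. set hash lookup.
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
missing = -1  # not in the collection: the list must scan all n elements

list_time = timeit.timeit(lambda: missing in as_list, number=100)
set_time = timeit.timeit(lambda: missing in as_set, number=100)

print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")  # set is dramatically faster
```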
Weekly Challenge 5: Recursion

To understand recursion, you must first understand recursion.

For Week 5 of my algorithm challenge, I decided to tackle a concept that trips up many beginners: recursive functions, using the classic Factorial problem.

What is recursion? Instead of using a standard for or while loop, a recursive function calls itself to solve a smaller piece of the problem. It keeps digging deeper until it hits a "base case" (the bottom), and then it passes the answers back up the chain. Think of it like a set of Russian nesting dolls.

The trade-off: while recursive code is extremely clean and mathematical, it uses more memory. Every time the function calls itself, it adds a new frame to the computer's call stack. If you forget your base case, your program crashes with a stack overflow (in Python, a RecursionError)!

I added a visual trace to my Python script so you can literally see the call stack growing and shrinking in the console.

Check the full code and console output on GitHub: https://lnkd.in/es5TzCUg

#Python #Recursion #Algorithms #CodingChallenge #SoftwareEngineering #DataScience
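A minimal sketch of a traced factorial in the spirit of the challenge above. The indentation trick for visualizing the call stack is my own take, not the GitHub script:

```python
# Factorial with a console trace: indentation depth mirrors the call stack.
def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")            # stack grows on the way down
    if n <= 1:                                         # base case: stop recursing
        result = 1
    else:
        result = n * factorial(n - 1, depth + 1)       # recursive case
    print(f"{indent}factorial({n}) returns {result}")  # stack shrinks on the way up
    return result

print(factorial(4))  # 24
```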
Day 4 of 35 — File Handling & Exceptions

Day 4 done! Today I learned how Python talks to the outside world — files, errors, and handling the unexpected.

Here's what I covered in 1.5 hours:
✅ Reading & writing files — the right way, using with open()
✅ Working with CSV files — reading and writing structured data
✅ try / except / finally — handling errors gracefully
✅ Custom exceptions — raising meaningful errors in your own code

Biggest insight today:
👉 Always use with open() instead of manually calling f.close()

Here's why:

```python
# ❌ Risky — the file stays open if an error occurs before close()
f = open("notes.txt", "r")
data = f.read()
f.close()

# ✅ Safe — the file auto-closes even if an error occurs
with open("notes.txt", "r") as f:
    data = f.read()
```

Also built a mini project today — read a CSV of student scores, found the highest scorer, and saved the result to a new file. Felt like real-world Python!

Day 4 ✅

#Python #FileHandling #100DayChallenge #LearningInPublic #CodingJourney #PythonProgramming #SoftwareEngineering #ExceptionHandling
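A hedged sketch of the mini project described above. The CSV layout (name, score columns), the file names, and the sample students are my assumptions, not the original project:

```python
# Read a CSV of student scores, find the highest scorer, save the result.
import csv

# Create a small sample file so the sketch is self-contained.
with open("scores.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows([["name", "score"], ["Asha", "91"], ["Ben", "77"], ["Chloe", "85"]])

# Read it back as dicts keyed by the header row.
with open("scores.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Highest scorer — compare scores numerically, not as strings.
top = max(rows, key=lambda r: int(r["score"]))

with open("top_scorer.txt", "w") as f:
    f.write(f"{top['name']}: {top['score']}\n")

print(top["name"])  # Asha
```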
🚀 Day 11/30 | LeetCode Problem: Merge Two Sorted Lists (21)

Problem: You are given the heads of two sorted linked lists, list1 and list2. Merge the two lists into one sorted linked list and return its head.

💡 Approach (Recursive)
Since both lists are already sorted:
- If one list is empty → return the other.
- Compare the current values of both lists.
- Attach the smaller node to the result.
- Recursively merge the remaining nodes.
This keeps the final list sorted automatically.

⏱ Complexity
Time Complexity: O(n + m)
Space Complexity: O(n + m) (due to the recursion stack)

🧠 Python Code

```python
class Solution:
    def mergeTwoLists(self, list1, list2):
        if not list1:
            return list2
        if not list2:
            return list1
        if list1.val < list2.val:
            list1.next = self.mergeTwoLists(list1.next, list2)
            return list1
        else:
            list2.next = self.mergeTwoLists(list1, list2.next)
            return list2
```

📌 Example
Input: list1 = [1,2,4], list2 = [1,3,4]
Output: [1,1,2,3,4,4]

🎯 Key Takeaway
When working with sorted data structures, "compare and attach" is a powerful pattern. Recursion also makes linked list problems elegant and clean. ✅ Accepted

#LeetCode #30DaysOfLeetCode #Day11 #Python #LinkedList #Recursion #DataStructures #Algorithms #ProblemSolving #CodingJourney
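Verifying the example above end to end. LeetCode supplies its own `ListNode` definition; the one here, along with the `from_list`/`to_list` helpers, is mine so the snippet runs stand-alone:

```python
# Recursive merge checked against the sample input [1,2,4] + [1,3,4].
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

class Solution:
    def mergeTwoLists(self, list1, list2):
        if not list1:
            return list2
        if not list2:
            return list1
        if list1.val < list2.val:
            list1.next = self.mergeTwoLists(list1.next, list2)
            return list1
        list2.next = self.mergeTwoLists(list1, list2.next)
        return list2

def from_list(values):
    head = None
    for v in reversed(values):
        head = ListNode(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

merged = Solution().mergeTwoLists(from_list([1, 2, 4]), from_list([1, 3, 4]))
print(to_list(merged))  # [1, 1, 2, 3, 4, 4]
```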
NumPy Boolean Trap

I'm still at the very beginning of my Python journey. But even with my tiny amount of experience, I already hit a subtle NumPy trap that can easily sneak into real code. Python is full of surprises, even at the very beginning.

It happens when you build a NumPy array without specifying a dtype, filling it from a function that should return booleans… but sometimes returns None when processing fails. At first, you expect a clean boolean array, because the function normally returns True or False. But NumPy has other plans.

Here's the trap 👇

🟥 1) Untyped array + a function that "should" return booleans

arr = np.array([my_func(x) for x in data])  # True / False ... or None

You expect a clean boolean array because the function usually returns True/False. But if even one value is None, NumPy must infer a dtype that can hold all the values.

🟦 2) NumPy silently switches to dtype=object

array([True, False, None, ...], dtype=object)

Impact:
- no vectorization
- logical operations break
- masks behave unpredictably
- performance collapses

You think you have a NumPy boolean array. You actually have a Python object array.

🟩 3) The silent conversion trap

Trying to fix it:

arr = np.empty(10, dtype=bool)
arr[i] = my_func()

NumPy converts None → False (silently). Impact:
👉 you lose the meaning of "no result"
👉 your data becomes wrong
👉 the bug becomes invisible

⭐ Takeaway
Same code. Same function. Two completely different arrays. NumPy's dtype inference can hide subtle bugs — and I found this one with almost no Python experience.

Curious to know:
👉 Have you ever run into this behavior?
👉 Or another NumPy dtype surprise?

#python #numpy #datascience #cleanCode #devTips #programming
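A reproducible sketch of the trap, using a toy `flaky_check` function I invented to stand in for the real one. It also shows one safer pattern: tracking validity in a separate boolean mask so "no result" is never silently collapsed into False.

```python
# One None in the input is enough to turn a "boolean" array into dtype=object.
import numpy as np

def flaky_check(i):
    """Usually returns a bool, but None when 'processing fails' (here: i == 2)."""
    return None if i == 2 else (i % 2 == 0)

values = [flaky_check(i) for i in range(5)]   # [True, False, None, False, True]
arr = np.array(values)
print(arr.dtype)  # object — NumPy inferred a dtype that can hold None

# Safer pattern: keep an explicit validity mask instead of losing the Nones.
valid = np.array([v is not None for v in values])   # dtype=bool: which results exist
flags = np.array([bool(v) for v in values])         # dtype=bool: None coerced *explicitly*
print(valid.dtype, flags.dtype)
```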