I ran `kill -9` on a Python worker processing three tasks. They vanished — no error, no retry, no record.

This is the default behavior of most task frameworks: a worker dies mid-execution, and the work disappears.

So I built automatic crash recovery into pynenc, an open-source distributed task orchestration framework for Python.

Here's what it does:
• Every runner emits periodic heartbeats
• When heartbeats stop, the recovery service detects the dead runner
• Orphaned tasks are automatically re-queued
• A healthy runner picks them up and finishes the job

No external monitoring. No manual re-queueing scripts. No lost work.

I wrote up the full scenario — including a runnable demo you can try locally with zero dependencies (no Docker, no Redis): https://lnkd.in/ehWVK-3p

The demo takes about 90 seconds and shows recovery happening end-to-end.

How does your team handle crashed workers today?

#python #distributedsystems #opensource #backend #reliability
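The recovery loop above can be sketched roughly like this. This is a minimal, illustrative sketch only; the class and method names are hypothetical, not pynenc's actual API:

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a runner is presumed dead

class RecoveryService:
    """Toy heartbeat monitor: detect dead runners, re-queue their tasks."""

    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_beat = {}   # runner_id -> timestamp of last heartbeat
        self.running = {}     # runner_id -> set of task ids in flight
        self.queue = []       # shared queue that healthy runners pull from

    def heartbeat(self, runner_id, now=None):
        # Runners call this periodically while alive
        self.last_beat[runner_id] = time.monotonic() if now is None else now

    def claim(self, runner_id, task_id):
        # Record which runner owns which in-flight task
        self.running.setdefault(runner_id, set()).add(task_id)

    def reap_dead_runners(self, now=None):
        # Re-queue tasks owned by runners whose heartbeats went stale
        now = time.monotonic() if now is None else now
        for runner_id, beat in list(self.last_beat.items()):
            if now - beat > self.timeout:
                self.queue.extend(sorted(self.running.pop(runner_id, set())))
                del self.last_beat[runner_id]
        return self.queue
```

In a real system the heartbeat table and queue would live in shared state (a database or broker) rather than in-process, but the detection logic is the same shape.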
Crash Recovery in Pynenc Task Orchestration Framework
Before hierarchies, before class names — what is Python’s exception system actually trying to do?

It’s answering one question:
👉 When something goes wrong, who is responsible for handling it?

Without exceptions, your program would just crash. No recovery, no fallback, no control.

With exceptions, Python essentially asks: “Something failed here — does anyone know how to handle this?” Then it walks up the call stack looking for an answer.

The hierarchy exists because not all problems are equal.
Dividing by zero ≠ missing file
Missing file ≠ out of memory
Out of memory ≠ user hitting Ctrl+C

You don’t want to treat all of these the same. Sometimes you want:
→ “Handle file-related issues only”
→ “Ignore user interruptions”
→ “Catch anything and log it”

Writing separate logic for every possible error would be chaos. So Python solves this with inheritance. Catch a parent exception, and you automatically catch everything under it.

The mental model that clicked for me: exceptions are a taxonomy of problems. Just like “animal” includes dogs, cats, birds… catching a higher-level exception means you’re choosing to handle an entire category of failures.

Big takeaway: exceptions aren’t just error messages — they’re a structured way to delegate responsibility in your program. And once you see that, try/except stops feeling like syntax… and starts feeling like design.

What mental model helped you understand exceptions better?
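A quick sketch of the taxonomy idea: catching OSError handles every file-related subclass beneath it, while KeyboardInterrupt deliberately sits outside Exception. The read_config helper here is hypothetical, purely for illustration:

```python
def read_config(path):
    # Hypothetical helper: handle the whole "file problem" category at once
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:  # parent of FileNotFoundError, PermissionError, ...
        return f"file problem: {type(exc).__name__}"

print(read_config("/no/such/file"))  # file problem: FileNotFoundError

# The taxonomy is inspectable: FileNotFoundError sits under OSError,
# while Ctrl+C (KeyboardInterrupt) sits OUTSIDE Exception, so a bare
# `except Exception:` won't swallow user interruptions.
print(issubclass(FileNotFoundError, OSError))    # True
print(issubclass(KeyboardInterrupt, Exception))  # False
```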
🚀 Python GIL vs No-GIL — Real FastAPI Benchmarks (Python 3.13)

Free-threaded Python is no longer just an experiment — it’s starting to show real impact. I came across a benchmark comparing Python 3.12 (with GIL) vs Python 3.13t (No-GIL) using FastAPI, and the results are pretty interesting 👇

💡 Key Takeaways:

🔹 Massive CPU Boost (~8x)
CPU-bound endpoints jumped from ~4 RPS to ~32 RPS — with ZERO code changes. This is what true parallelism across cores looks like.

🔹 Threading inside requests ≠ better performance
Even without the GIL, spawning threads inside a single request didn’t help. Why? Under load, request-level parallelism already saturates the CPU. Extra threads just add overhead.

🔹 I/O performance unchanged
No surprise here — the GIL was never the bottleneck for I/O-bound workloads. Async + I/O still behaves the same.

📊 What this means in practice:

✅ Use No-GIL Python when:
- You have CPU-heavy APIs (ML inference, image processing, data pipelines)
- High concurrency + CPU contention exists
- You previously relied on multiprocessing to bypass the GIL

❌ Don’t expect gains if:
- Your app is mostly I/O (DB calls, HTTP requests)
- You’re already using async effectively

⚠️ Things to keep in mind:
- Free-threading is still evolving
- Thread safety is now YOUR responsibility
- Some C extensions may not be ready yet

🔥 The most exciting part? Same code. Same FastAPI app. Just a different Python runtime → 8x improvement. This could seriously change how we design backend systems in Python.

Curious — would you switch to No-GIL Python for your APIs?

#Python #FastAPI #BackendEngineering #Performance #Concurrency #AI #SoftwareEngineering
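If you want to probe this on your own machine, a rough sketch (the workload and numbers are illustrative, not the benchmark's; sys._is_gil_enabled was added in 3.13, so older versions fall back to reporting the GIL as on):

```python
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n=200_000):
    # Pure-Python CPU-bound loop; serialized by the GIL on a standard build
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(workers):
    # Run 4 CPU-bound tasks on the given number of threads and time it
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(cpu_task, [200_000] * 4))
    return time.perf_counter() - start, results

gil = getattr(sys, "_is_gil_enabled", lambda: True)()  # hedge for < 3.13
t1, r1 = timed(1)
t4, r4 = timed(4)
print(f"GIL enabled: {gil}  1 worker: {t1:.3f}s  4 workers: {t4:.3f}s")
# On a free-threaded build, expect the 4-worker time to be noticeably lower.
```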
I just published a new article on a problem every Python developer eventually faces: dependency hell.

After breaking my environment one too many times, I decided to rethink my workflow and design a clean architecture using Conda + Spyder. The idea is simple: isolate everything.

This approach helped me eliminate conflicts, improve reproducibility, and work more efficiently on my projects. If you’ve ever lost hours trying to fix a broken environment, this might help.

#Python #MachineLearning #DataScience #SoftwareEngineering #Productivity
The Python ecosystem's insistence on solving multiple problems when distributing functions has led to unnecessary complexity. The dominant frameworks have fused orchestration into the execution layer, imposing constraints on function shape, argument serialization, control flow, and error handling.

Wool takes a different approach by allowing execution to be distributed without the need for DAG definitions, checkpointing, or retry logic, focusing on simplicity and transparency. Wool provides distributed coroutines and async generators that enable transparent execution on remote worker processes while maintaining the same semantics as local execution.

https://lnkd.in/eJ97fuAp

---

More tech like this—join us 👉 https://faun.dev/join
Python: @staticmethod vs @classmethod (Explained Simply)

In Python classes, not all methods behave the same. There are 3 types of methods:

1) Instance Method: works with object data.
def show_name(self):
• Uses self.
• Accesses instance variables.

2) Class Method (@classmethod): works with class-level data.
@classmethod
• Uses cls.
• Can modify class variables.
• Shared across all objects.

3) Static Method (@staticmethod): an independent utility function.
@staticmethod
• No self, no cls.
• Doesn’t modify class or instance state.
• Used for helper logic.

In this example:
• show_name() → works on the object.
• change_company() → updates the company for all employees.
• greet() → a simple helper function.

Think of it like this:
- Instance → works with the object.
- Class → works with the class.
- Static → works independently.

Comment below: which one do you use most in your code?
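The example the post describes is not shown, so here is a minimal reconstruction of those three methods (the Employee class and the company names are hypothetical):

```python
class Employee:
    company = "Acme"                 # class variable, shared by all employees

    def __init__(self, name):
        self.name = name             # instance variable

    def show_name(self):             # instance method: works on the object
        return f"{self.name} works at {self.company}"

    @classmethod
    def change_company(cls, new):    # class method: updates all employees
        cls.company = new

    @staticmethod
    def greet():                     # static method: independent helper
        return "Welcome to the team!"

a = Employee("Asha")
b = Employee("Ravi")
Employee.change_company("Globex")    # class-level change is visible everywhere
print(a.show_name())                 # Asha works at Globex
print(b.show_name())                 # Ravi works at Globex
print(Employee.greet())              # Welcome to the team!
```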
🚀 Python’s Concurrency Era Is Changing — Are You Ready?

For decades, the Global Interpreter Lock (GIL) has been one of Python’s most debated design choices. With Python 3.12, the GIL is still very much part of the runtime. But Python 3.13 introduces something that could reshape how we think about Python performance: an *optional* GIL-free build.

Let that sink in. This isn’t just a version upgrade — it’s a philosophical shift.

🔍 What’s actually happening?
Python 3.12: continues with the traditional GIL model — predictable, stable, and battle-tested.
Python 3.13: introduces an experimental no-GIL build, allowing true parallel execution of threads.

💡 Why this matters
For years, Python developers have worked around the GIL using multiprocessing, async programming, or offloading to C extensions. Now, Python is exploring a future where those workarounds may not always be necessary.

⚖️ Pros of a GIL-free Python (3.13 experimental)
✅ True Multithreading
CPU-bound tasks can finally run in parallel without jumping through hoops.
✅ Simpler Mental Model (in some cases)
Less need to decide between threads vs processes for performance.
✅ Better Hardware Utilization
Modern multi-core systems can be leveraged more effectively.

⚠️ Cons & Trade-offs
❌ Performance Overhead
Removing the GIL introduces complexity; single-threaded performance may take a hit.
❌ Ecosystem Compatibility
Many existing libraries assume the presence of the GIL. The transition won’t be instant.
❌ New Class of Bugs
Race conditions and synchronization issues will become more common for Python developers.

🧠 The Bigger Insight
This is not about “GIL = bad” or “No GIL = good.” It’s about *choice*. Python is evolving from a one-size-fits-all runtime into a more flexible platform that acknowledges diverse workloads — from scripting to high-performance computing.

📌 What should you do as a developer?
* Don’t rush to rewrite everything. The no-GIL build is still experimental.
* Start understanding concurrency deeply — the future will reward it.
* Keep an eye on library support and benchmarks before adopting.

The GIL debate isn’t ending — it’s entering its most interesting phase yet.

#Python #SoftwareEngineering #Concurrency #TechTrends #Programming #Threading
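Before adopting anything, it helps to know which runtime you are actually on. A minimal probe sketch (sys._is_gil_enabled and the Py_GIL_DISABLED build flag exist as of 3.13; on older versions this falls back to reporting the GIL as enabled):

```python
import sys
import sysconfig

# Is this interpreter a free-threaded build, and is the GIL actually
# disabled at runtime? A free-threaded build can still re-enable the GIL,
# e.g. when an incompatible C extension is loaded.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"free-threaded build: {free_threaded_build}, GIL enabled: {gil_enabled}")
```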
The Python HTTP client space is in a weird place right now.

requests is everywhere, but it is 10+ years old. httpx looked like the future for async, but development has slowed down and the direction is not that clear anymore. So teams either stick with something outdated or build workarounds around its limitations.

Zapros is one of the projects trying to rethink this layer. What is interesting is not the library itself, but the idea behind it: instead of tying the client to a specific transport implementation, it separates the layers and builds everything around abstractions.

That opens things like:
- switching transport without rewriting the client logic
- composing behavior through middlewares (retries, caching, etc.)
- supporting both sync and async in a cleaner way

Will it replace existing tools? Hard to say. Still, it is a good signal that people are trying to rethink this layer, not just patch it. And this is usually where bigger architectural shifts start.

Curious what others use today for HTTP in Python. Still requests? Moved to httpx? Or something else entirely?
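The transport/middleware separation described above can be sketched generically. To be clear, this is NOT Zapros's actual API, just the general pattern: the transport is the innermost handler, and middlewares wrap it without the client logic knowing:

```python
from typing import Callable

# A handler maps a request dict to a response dict; a "transport" is just
# the innermost handler, so it can be swapped without touching callers.
Handler = Callable[[dict], dict]

def retry_middleware(next_handler: Handler, attempts: int = 3) -> Handler:
    # Wrap any handler with retry-on-connection-error behavior
    def handler(request: dict) -> dict:
        last_exc = None
        for _ in range(attempts):
            try:
                return next_handler(request)
            except ConnectionError as exc:
                last_exc = exc
        raise last_exc
    return handler

def fake_transport(request: dict) -> dict:
    # Stand-in transport for illustration; a real one would do network I/O
    return {"status": 200, "url": request["url"]}

client = retry_middleware(fake_transport)
print(client({"url": "https://example.com"}))
```

Caching, logging, or auth middlewares compose the same way: each one takes a Handler and returns a Handler.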
Some Python list tutorials stop at my_list.append(x). That is the surface.

Underneath, a list is a C struct called PyListObject holding an array of pointers to PyObject instances. The list does not store your data. It stores references to wherever your data lives on the heap. That single fact is the root cause of the aliasing bugs that catch developers off guard.

A few things that land differently once you understand the memory model:

Why append() is O(1) amortized. CPython over-allocates on resize using the growth sequence 0, 4, 8, 16, 24, 32, 40, 52, 64, 76... so the O(n) copy cost spreads across many appends.

Why b = a and then mutating b also mutates a. They are two names pointing at the same PyListObject.

Why list.sort() runs in O(n) on nearly-sorted data. Timsort, written by Tim Peters in 2002, finds already-sorted runs and merges them. Stability has been a documented guarantee since Python 2.3.

Why list.pop() from the end is O(1) but list.pop(0) is O(n). Elements after the index have to shift.

I put together an 11-tutorial learning path on PythonCodeCrack that walks through lists from first principles through the copy semantics and aliasing patterns that cause hard-to-trace bugs. Fundamentals first (creation, slicing, append vs extend, sorting, comprehensions), then the advanced group (flattening, shallow vs deep copy, why your list keeps changing unexpectedly).

https://lnkd.in/g5uUXj6d

#Python #SoftwareEngineering #CPython #Programming
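The aliasing and copy semantics above, in runnable form:

```python
import copy

# Two names, one PyListObject: mutating b mutates a
a = [1, 2, 3]
b = a
b.append(4)
print(a)          # [1, 2, 3, 4]

# Shallow copy: new outer list, but the SAME inner lists
shallow = [[1, 2], [3, 4]]
c = list(shallow)
c[0].append(99)
print(shallow[0])  # [1, 2, 99] -- changed through the copy

# Deep copy: fully independent object graph
d = copy.deepcopy(shallow)
d[0].append(7)
print(shallow[0])  # [1, 2, 99] -- unchanged this time
```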
If you try to scale concurrency in Python like Go… your system will slow down before it scales.

This isn’t about which language is better. It’s about how each language was designed to handle concurrency. And that difference shows up the moment your backend starts handling real traffic.

Let’s start with Python. Python supports concurrency through:
- threads
- async (asyncio)

But there’s a fundamental limitation: the Global Interpreter Lock (GIL). The GIL allows only one thread to execute Python bytecode at a time. So even if you create multiple threads, they don’t truly run in parallel (for CPU work); they take turns executing.

This makes concurrency in Python:
- harder to scale for CPU-heavy tasks
- dependent on workarounds like multiprocessing
- more complex to reason about in real systems

Golang was built with concurrency at its core. Instead of threads, it uses goroutines:
- lightweight
- cheap to create
- managed by the Go runtime

You can run thousands — even millions — of concurrent tasks without worrying about system overhead.

With Go:
- concurrent code looks like normal code
- channels make communication explicit
- timeouts and cancellations are built-in patterns
- concurrency is easier to reason about at scale

Then comes the scheduler. Go uses an M:N scheduler: many goroutines mapped to a few OS threads. This allows Go to:
- utilize multiple CPU cores efficiently
- switch tasks quickly
- handle high-load systems predictably

Python, because of the GIL, doesn’t achieve this without spawning multiple processes.

Go makes it easier to build:
- high-concurrency APIs
- scalable backend systems
- predictable distributed services

Python excels at:
- rapid development
- AI/ML workloads
- flexibility

#Golang #Python #BackendDevelopment #SystemDesign #Concurrency #SoftwareEngineering #DistributedSystems
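Go's channel idiom can be approximated in Python with queue.Queue; a hedged sketch of the worker-pool pattern (the sentinel-based shutdown is one common convention, not the only one). Note this buys you explicit communication and concurrency, but on a GIL build the CPU work in the workers still takes turns rather than running in parallel:

```python
import queue
import threading

def worker(jobs: queue.Queue, results: queue.Queue):
    # Pull jobs until the None sentinel arrives, like ranging over a channel
    while True:
        n = jobs.get()
        if n is None:
            break
        results.put(n * n)

jobs, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):          # send work
    jobs.put(n)
for _ in threads:            # one sentinel per worker to shut down
    jobs.put(None)
for t in threads:
    t.join()
print(sorted(results.queue))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```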
Build a RAG pipeline from scratch in Python without LangChain

Learn to build a RAG pipeline in Python from scratch — chunk documents, embed with OpenAI, store in ChromaDB, and query with Claude.

Read the full post 👇 https://lnkd.in/gBHKATvy

#GenerativeAI #AI #WebDevelopment #PHP #Python #Developer #LLM
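The chunking stage, the first step of that pipeline, can be sketched in a few lines. The size and overlap values here are illustrative, and the embedding and database calls are omitted:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Fixed-size chunks with overlap, so context carries across boundaries
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "word " * 300          # stand-in document (1500 characters)
chunks = chunk_text(doc, size=500, overlap=50)
print(len(chunks), len(chunks[0]))  # 4 500
```

Each chunk would then be embedded and stored; the 50-character overlap means the tail of one chunk repeats at the head of the next.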