One thing many Rust devs miss (until production): **dropping a Future is a *control-flow path***. In async Rust, cancellation usually happens by **Drop**. That means whenever you use `tokio::select!`, timeouts, or early returns, the "losing" branch gets dropped, and any cleanup/side-effects in `Drop` will run.

Why it matters:
- If you hold a `MutexGuard`/`RwLockWriteGuard` across an `.await`, cancellation can drop the guard at an unexpected point.
- If your type's `Drop` does I/O, logging, or releases external resources, cancellation becomes observable behavior.
- "Looks correct" code can still be **not cancellation-safe**.

Practical rules of thumb:
- Keep critical sections **small** (don't hold locks across `.await`).
- Treat `Drop` like a finally-block: it will run on *success, error, and cancellation*.
- For cleanup, prefer explicit scopes (or `scopeguard`) and design APIs that are cancellation-safe by default.

This is one of the reasons Rust async feels so reliable: the compiler helps, but you still need to model cancellation.

#rust #softwareengineering #async
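The "Drop is a finally-block" rule can be demonstrated even in plain sync code. A minimal std-only sketch (names like `Cleanup` and `do_work` are my own, not from the post); in async, the same `Drop` fires for a future's locals when a `select!` loser or a timed-out branch is dropped:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A guard whose Drop acts like a `finally` block: it runs on normal
// completion, on early return, and during unwinding. Async cancellation
// behaves the same way: dropping the future drops its locals.
struct Cleanup<'a> {
    ran: &'a AtomicBool,
}

impl Drop for Cleanup<'_> {
    fn drop(&mut self) {
        // In real code this might release a resource or log; here we
        // just record that cleanup happened.
        self.ran.store(true, Ordering::SeqCst);
    }
}

fn do_work(ran: &AtomicBool, fail: bool) -> Result<(), &'static str> {
    let _cleanup = Cleanup { ran };
    if fail {
        return Err("early exit"); // the guard is still dropped here...
    }
    Ok(()) // ...and here
}
```

The point of modeling cleanup this way is that no exit path can forget it, including the cancellation path you never wrote explicitly.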
Dropping Futures in Rust: A Control-Flow Path to Consider
More Relevant Posts
🚀 Excited to share my latest project: Building a Multi-Producer Multi-Consumer Queue in Rust! I've recently completed a bounded MPMC queue implementation in Rust using Mutex + Condvar primitives. Here's what I worked on:

🔧 What I Built: A thread-safe, production-ready queue that supports:
✅ Blocking push/pop operations for synchronized data flow
✅ Non-blocking try_push/try_pop for async-friendly workflows
✅ Bounded capacity to prevent unbounded memory growth
✅ Full thread-safety guarantees with proper synchronization primitives

💡 Why This Project: Understanding concurrent data structures is fundamental in systems programming. Building this queue from scratch gave me hands-on experience with:
- Rust's ownership model and thread-safety guarantees
- Synchronization primitives (Mutex & Condvar) and their real-world applications
- Designing APIs that balance performance with ease of use
- Testing concurrent systems for race conditions and deadlocks

📚 Key Learnings:
1. Concurrency is hard - the devil is in the details. Proper synchronization requires careful thinking about lock ordering and condition signaling.
2. Rust's type system prevents bugs - the compiler catches thread-safety issues at compile time, not runtime.
3. Performance matters - I implemented both blocking and non-blocking variants to optimize for different use cases.
4. Testing is critical - concurrent code needs comprehensive stress tests and benchmarks to validate correctness.

📊 Tools & Tech Stack: Rust 🦀 | Cargo | Thread Safety Primitives | Systems Design

Open to collaboration and feedback! Check it out on GitHub: mpmc-queue https://lnkd.in/g9qAJFF8
Crate - https://lnkd.in/gZtK88Tz

#Rust #SystemsEngineering #Concurrency #OpenSource #SoftwareDevelopment #LearnInPublic
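The blocking + non-blocking API described above can be sketched with std's `Mutex` and `Condvar`. This is a minimal illustration under my own names (`BoundedQueue`), not the linked crate's actual code:

```rust
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

// Minimal bounded MPMC queue: a Mutex-protected VecDeque plus two
// Condvars, one signaling "not full" (for pushers), one "not empty"
// (for poppers).
pub struct BoundedQueue<T> {
    inner: Mutex<VecDeque<T>>,
    cap: usize,
    not_full: Condvar,
    not_empty: Condvar,
}

impl<T> BoundedQueue<T> {
    pub fn new(cap: usize) -> Self {
        Self {
            inner: Mutex::new(VecDeque::with_capacity(cap)),
            cap,
            not_full: Condvar::new(),
            not_empty: Condvar::new(),
        }
    }

    // Blocking push: wait (in a loop, to handle spurious wakeups)
    // while the queue is at capacity.
    pub fn push(&self, item: T) {
        let mut q = self.inner.lock().unwrap();
        while q.len() == self.cap {
            q = self.not_full.wait(q).unwrap();
        }
        q.push_back(item);
        self.not_empty.notify_one();
    }

    // Blocking pop: wait while the queue is empty.
    pub fn pop(&self) -> T {
        let mut q = self.inner.lock().unwrap();
        while q.is_empty() {
            q = self.not_empty.wait(q).unwrap();
        }
        let item = q.pop_front().expect("queue is non-empty here");
        self.not_full.notify_one();
        item
    }

    // Non-blocking variant: fail fast instead of waiting, returning
    // the item back to the caller on a full queue.
    pub fn try_push(&self, item: T) -> Result<(), T> {
        let mut q = self.inner.lock().unwrap();
        if q.len() == self.cap {
            return Err(item);
        }
        q.push_back(item);
        self.not_empty.notify_one();
        Ok(())
    }
}
```

Note the `while` (not `if`) around each `wait`: condvar wakeups may be spurious, and with multiple consumers another thread may have emptied the queue between the notify and the wakeup.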
Iterators in Rust are zero-cost abstractions - essentially you can use high-level functional constructs like map, filter, and sum without any additional runtime overhead.

I ran a simple test - iterating a slice - to compare both. Both versions - iterators and handwritten loops - compile down to identical machine code. For an x86_64 target with release optimization (-O3), the compiler uses SIMD with loop unrolling for both: instead of processing one number at a time, it processes four numbers simultaneously by utilising multiple 128-bit registers in parallel.

To quote the official Rust-lang book: "Now that you know this, you can use iterators and closures without fear! They make code seem like it's higher level but don't impose a runtime performance penalty for doing so."

Side-by-side comparison on Compiler Explorer:
Iterator version: https://lnkd.in/gVSRkcpf
Handwritten loop: https://lnkd.in/gUVfqUy2

#RustLang
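A minimal pair of the kind being compared might look like this (my own sketch; the actual Compiler Explorer snippets are behind the links above):

```rust
// Iterator version: filter + map + sum, no explicit indexing or
// mutable accumulator.
fn sum_of_even_squares_iter(xs: &[i32]) -> i32 {
    xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

// Handwritten loop doing the same work. In release builds both
// functions typically lower to the same auto-vectorized machine code.
fn sum_of_even_squares_loop(xs: &[i32]) -> i32 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}
```

Compiling both at `-C opt-level=3` and diffing the assembly is the same experiment the post describes.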
ASYNC/AWAIT: WHAT ACTUALLY HAPPENS BEHIND THE SCENES Async/await looks simple, but the underlying mechanics are often misunderstood. When you use await, the compiler transforms your method into a state machine. Instead of blocking threads, it schedules continuations on the thread pool. This improves scalability, especially under high load. But misuse, like blocking on async code, can lead to deadlocks and thread starvation. Understanding this transformation helps you write safer async code. What was your biggest mistake when first working with async/await? #dotnet #csharp #asyncawait #backend #concurrency
𝐑𝐮𝐛𝐲 4 𝐂𝐨𝐧𝐜𝐮𝐫𝐫𝐞𝐧𝐜𝐲 𝐆𝐞𝐭𝐬 𝐑𝐞𝐚𝐥: 𝐑𝐚𝐜𝐭𝐨𝐫::𝐏𝐨𝐫𝐭 Ruby has historically balanced safety and developer happiness, but true parallelism was limited by the GVL. Ractors introduced actor-style concurrency, and now Ruby 4’s Ractor::Port makes them practical for real systems. Why it matters: • Explicit channels → clear communication between Ractors • Fan-out / fan-in pipelines → easy parallel job distribution and aggregation • No shared mutable state → safe multicore execution • Ideal for CPU-heavy workloads → image/video processing, analytics, simulations Not for web requests or typical Rails I/O — stick to threads and async I/O there. Ractor::Port turns Ractors from a curiosity into a tool for real concurrent architectures inside Ruby, all while keeping safety intact. #ruby #ruby4 #concurrency #multicore #ractors #softwaredevelopment #backend #programming #performance
Stop debugging race conditions at 3 AM. 🛑

For decades, multithreading has been the developer's nightmare. Shared state, unpredictable timing, and the dreaded "it works on my machine" data races have cost companies billions and sleepless nights.

Enter Rust and its flagship promise: fearless concurrency. Unlike other languages where you hope your locks are correct, Rust's ownership model and borrow checker act as a strict gatekeeper. They don't just warn you; they refuse to compile your code if there's even a possibility of a data race. 🤯

Here is the mind-blowing stat: according to Google's Android team, after introducing Rust into Android, memory safety vulnerabilities in their Rust-written components dropped to zero. Not "fewer," not "mostly fixed." Zero. This isn't theory; it's production reality.

How does it work?
🔒 Ownership: only one owner of data at a time, preventing accidental simultaneous writes.
🧠 Borrow checker: ensures references to data are valid and safe before your code ever runs.
⚡ `Send` and `Sync` traits: these marker traits explicitly tell the compiler which types are safe to move or share across threads.

The result? You write concurrent code with the confidence that if it compiles, it's free of data races. No more runtime surprises.

As we move into 2026, with 45% of enterprises now running Rust in production, the shift from "hope-based" concurrency to "guaranteed-safe" concurrency is no longer optional for high-stakes systems.

Have you tried writing multithreaded code in Rust yet? What was your biggest "aha" moment with the borrow checker? Share your experience with Rust's concurrency model in the comments below! 👇

#Concurrency #MultiThreading #Coding #RustLang #SoftwareEngineering #MemorySafety
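The ownership + `Send`/`Sync` story above in a few lines (a standard sketch, not from the post): `Arc` gives shared ownership across threads, `Mutex` gives exclusive access, and both together are `Send + Sync`, so the compiler accepts moving clones into spawned threads. Try sharing a bare `&mut i32` across the threads instead and the program simply does not compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each thread increments a shared counter `increments` times.
// Arc<Mutex<i32>> is Send + Sync, so spawning is allowed; unsynchronized
// shared mutation would be rejected at compile time.
fn parallel_count(threads: usize, increments: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..increments {
                // The lock guard is dropped at the end of each iteration,
                // releasing the mutex for other threads.
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```

The same guarantee covers data races only; logic-level races and deadlocks are still your job.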
𝗧𝗵𝗲 𝗥𝗲𝗮𝗰𝘁 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝗿 𝗝𝘂𝘀𝘁 𝗠𝗮𝗱𝗲 𝘂𝘀𝗲𝗠𝗲𝗺𝗼 𝗢𝗯𝘀𝗼𝗹𝗲𝘁𝗲 I removed 142 useMemo calls from our production code last week. They were not wrong, but the React Compiler made them unnecessary. The v1 release of the React Compiler dropped quietly, but it handles work that you used to do manually. It tracks dependencies across components and knows when a value is stable. Here are three things it does better than you: - Cross-component dependency tracking - Automatic bailouts for unstable references - Tree-aware memoization boundaries Our Table component render time went from 18ms to 4ms. This is not a small optimization. It makes a big difference in performance. Note that the compiler does not optimize legacy class components or code that fights the compiler. If you are on React 19 and use functional components with hooks, try removing useMemo and useCallback from a single route and measure the performance. Source: https://lnkd.in/gBDxPAzm
Interesting post by Hamilton Greene about how to adapt Rust for developer experience and productivity. I agree with the proposed approach: simplify Rust, and the performance and accuracy gains will remain. We must be pragmatic. https://lnkd.in/drRbSFjq #Rust
How can you statically verify that a #function is never called? Working with #Ferrocene in safety- and mission-critical software, that kind of guarantee can make a difference in your certification processes – whether you're trying to ensure a certain path is never reached or just making sure `panic!` doesn't ruin your day.

Compiler team lead Jynn Nelson breaks down callgraph analysis and how we implemented it for Ferrocene as a custom compiler driver. Dive in to learn:
⚡ Different approaches you can take
⚡ What we chose for Ferrocene and why
⚡ Special requirements of `core` and certification
⚡ How to try it yourself

If your business is compiler internals, static analysis, or how Rust can support high-assurance development, then this is a #mustread for #RustLang.

Read more: https://lnkd.in/duc8bShJ

#SoftwareVerification #SafetyCritical #CompilerEngineering #SystemsProgramming
Rust is incredible, but it is not a universal answer.

It shines when you care a lot about performance, safety, and predictability: high-throughput backend services, systems programming, infra tooling, proxies, runtimes, storage engines, and performance-critical components inside larger products. If you are pushing hardware limits or need C/C++-level speed with strong safety guarantees, Rust is a very strong choice.

On the other hand, for quick prototypes, simple CRUD apps, or products that live inside heavy dynamic ecosystems (Python data stacks, JS-first teams, no-ops internal tools), Rust can be overkill. The learning curve, stricter compiler, and ecosystem tradeoffs are real costs.

For me, the core takeaway from this series is simple: Rust gives you low-level performance with high-level safety and very predictable behavior, which makes it an ideal foundation for modern high-performance systems. Use it where those properties matter most, and do not feel guilty reaching for other languages when speed of iteration or ecosystem fit matters more.

#Rust #PerformanceEngineering #SoftwareArchitecture