💡 Understanding union vs std::variant in C++ (from low-level to modern safety)

Let’s start with a classic C++ concept: union. A union allows multiple members to share the same memory location. Instead of allocating space for each field, it uses only enough memory for the largest one.

union Value {
    Node* p;
    int i;
};

Here:
• p and i occupy the same address
• sizeof(Value) == max(sizeof(Node*), sizeof(int))

👉 This is efficient — but comes with a cost.

⚠️ The Problem
A union does not track which member is active.

Value v;
v.i = 42;
// Later...
std::cout << v.p; // ❌ Undefined behavior

You’re responsible for remembering what’s stored. That’s why we often add a manual tag:

enum class Type { ptr, num };

struct Entry {
    Type t;
    Value v;
};

And use it like:

if (entry.t == Type::num)
    std::cout << entry.v.i;

This pattern is called a tagged union.

🧠 The Modern C++ Solution: std::variant
Since C++17 we have a safer, cleaner alternative:

#include <variant>
using Value = std::variant<Node*, int>;

Now:
• The type knows what it currently holds
• No manual tag needed
• No undefined behavior (if used correctly)

✅ Accessing the value safely

Value v = 42;
if (std::holds_alternative<int>(v)) {
    std::cout << std::get<int>(v);
}

Or even better:

std::visit([](auto&& val) { std::cout << val; }, v);

In modern C++, the default choice should almost always be:
👉 std::variant over raw union

#cpp #cplusplus #moderncpp #programming #softwareengineering #systemsprogramming #lowlevel #memorymanagement #cpp17 #developers
Ihor Shevchenko’s Post
Most people think "new" just creates an object. It actually does two things.

When you write:

int* p = new int(10);

It:
• Allocates memory
• Constructs the object

But what if you already have memory, and only want to construct an object there? That’s where placement new comes in.

alignas(int) char buffer[sizeof(int)];
int* p = new (buffer) int(10);

Here:
• No memory allocation happens
• The object is constructed inside existing memory (note the alignas: the buffer must be suitably aligned for the type)

This means allocation and construction are actually separate steps in C++.

Why this matters:
• Custom memory management
• Object pools
• Performance-critical systems

But there’s a catch. You are responsible for:
• Managing the memory
• Calling the destructor manually

This is not something you use every day. But it shows how much control C++ gives over memory and object lifetimes.

Didn’t expect this level of control at first — kinda blew my mind.

#cpp #cplusplus #systems #softwareengineering #programming
I used to add std::move everywhere to "optimise" my C++. Turns out... I was sometimes making things slower.

Two surprising cases:

1. It can block compiler optimisations. For instance, this:

return std::move(local);

can prevent Named Return Value Optimisation (NRVO) because it forces the compiler to treat local as an rvalue, preventing it from eliding the copy/move entirely. So instead of zero cost, I was forcing an extra move.

2. It is pointless for small, trivially copyable types. For them, moving is effectively the same as copying. Example:

struct Point { int x, y; };
Point a{1, 2};
Point b = std::move(a); // no real gain: a is effectively copied

We’re not saving anything here; in fact, it just adds noise.

One big takeaway for me: std::move is not an optimisation, it is a signal. It tells the compiler "I am done with this object, you can steal its resources."

So now I only use it when it actually matters, such as for types that own heap memory.

What surprised you most about std::move when you learned it properly?

#CPlusPlus #optimization #moderncpp
Reading through the upcoming C++26 changes, I ran into an example that captures a recurring tension in modern C++ very well.

// Predicate: select monsters that are currently dead.
auto dead = [](const auto& monster) { return monster.isDead(); };

// Problematic pattern: we iterate over a filtered view and then mutate
// the same property that the filter depends on.
for (auto& monster : monsters | std::views::filter(dead)) {
    monster.bringBackToLife(); // Undefined behavior
}

At first glance, this feels surprising. C++26 proposes this form instead:

// Treat the range as input-only before filtering.
// This avoids the multi-pass assumptions that make the previous code invalid.
for (auto& monster : monsters | std::views::as_input | std::views::filter(dead)) {
    monster.bringBackToLife(); // OK
}

What makes this interesting is not just the rule itself, but the feeling it creates. When a language needs an extra adapter to make a very intuitive loop valid, it raises a deeper design question: is the programmer doing something exotic, or is the abstraction exposing machinery that leaks too much into everyday code?

Examples like this still make C++ feel as if correctness sometimes depends less on what the code means, and more on whether you remembered the exact operational contract of the pipeline.

This is one of those moments where modern C++ feels incredibly powerful, but also slightly at odds with how humans naturally expect code to behave.

#cpp #cpp26 #programming #softwareengineering #systemsprogramming
🚫 Stop passing const std::string& when you don't need to. There's a better way, and it's been in the standard since C++17.

Meet std::string_view.

Here's a classic mistake I see in C++ codebases: when you pass a string literal like "hello" to a const std::string& parameter, a temporary std::string is silently constructed (and heap-allocated, once the string outgrows the small-string optimisation). With string_view? Nothing. Zero overhead. It's just a pointer + length pair pointing into an existing buffer.

Where it really shines is substring parsing: no new, no heap, no copy. You're just sliding a window over existing memory.

🧠 Key insight: string_view is perfect for read-only string processing: parsers, tokenizers, log analyzers, and protocol handlers. Anywhere you're slicing and inspecting strings without modifying them.

Two caveats worth knowing:
· Don't store a string_view if the underlying string can be destroyed: dangling-reference territory.
· It's non-owning by design. That's the whole point.

Available since C++17. If you're not using it, you're leaving free performance on the table.

This is Day 1 of my C++ deep-dive series. Follow along if you want to write faster, leaner C++.

What's your go-to use case for string_view? Drop it below 👇

#cpp #cplusplus #programming #softwareengineering #performanceprogramming
There was a project some years ago where the customer wanted the firmware written in C++. Object oriented, clean abstractions, the works. The hardware was a small microcontroller with extremely limited RAM and no dynamic memory allocation — no heap, effectively. C++ without new and delete feels like cooking without fire. Technically possible. Deeply uncomfortable at first. The team's initial reaction was to push back and ask for either more capable hardware or permission to use C. Neither was on the table. The hardware was fixed. The language preference was fixed. So we had to figure it out. What followed was one of the more interesting firmware exercises I have been part of. We used placement new to work with statically allocated memory pools. Every object had a fixed home determined at compile time. Templates replaced what would normally be runtime polymorphism. The linker script became something we understood intimately rather than accepted as given. It was constrained. It was occasionally maddening. But the output was genuinely clean firmware that the customer's team — used to higher level C++ — could read and maintain without needing to relearn how to think. The lesson I took from it was not a technical one. It was that constraints force a quality of thinking that comfort never does. Some of the cleanest designs I have seen came out of situations where the easy path was simply not available. The projects that stretch you the most are rarely the ones with the best specs. What is the tightest constraint a project has ever pushed you through? #CPlusPlus #EmbeddedSystems #Firmware #Engineering #PandianPosts #Embien
This project is a DLL injector developed in C++ that allows users to inject external DLL files into target processes. It supports multiple injection methods and manual mapping features, enabling code execution and dependency resolution. 🔗 https://lnkd.in/gaMEk-Ez
BASIC Variables Made Easier 📔

I use the VS64 extension in VS Code for my C64 BASIC development, and I really like the auto numbering and labeling, but variables are still no fun to keep track of. Each time I come up with a new naming system, I forget what the names mean and still end up switching files to figure them out.

So I decided to create a step in the build process for auto-assigning variable names. Similar to labels, I define a name on the first definition using an alias, and then that along with all the other references gets updated in the build. I haven't fully settled on the exact implementation yet, but so far it works great.

So it works like this. First the variable is assigned using an alias:

@screenRegister=1024

Then it's used as normal, without the @, anywhere in the code:

POKE screenRegister, 81

When the code is built it gets converted to this:

A1=1024
POKE A1, 81

It uses names from A1 to ZZ, so the single-character ones that I commonly use, such as i, x, and y, are not affected. It will also make sure the name doesn't already exist, and will just go on to the next name if it does.

#commodore64 #basicprogramming
A mutex might sound like the solution here, but you also essentially lose the parallelism by making each thread wait for the others to finish their work. Things like atomics should be considered the de facto answer; mutexes are a last resort.
Debugging a Race Condition in C++

While working on a multithreaded module, I encountered an unexpected issue: even though the logic looked correct, the final output was inconsistent every time.

Problem: multiple threads were updating a shared variable without synchronization.
Expected: 200000
Actual: random incorrect values

After debugging, I found the root cause: a race condition due to the non-atomic operation counter++.

Key learning: in multithreading, even a simple increment operation is not safe. It involves three steps:
Read → Modify → Write
When multiple threads execute this simultaneously, data gets corrupted.

Solution: used a mutex to synchronize access to the shared resource. Also improved the code using lock_guard<mutex> for better safety and readability.

Takeaway: never trust shared data in multithreaded environments without proper synchronization.

#cpp #multithreading #racecondition #concurrency #debugging
“Inheritance is the base class of evil.” — Sean Parent

Sounds dramatic… until you realize how easily inheritance becomes a design trap.

What went wrong? Inheritance got overused for code reuse instead of modeling true relationships. That led to:
❌ Tight coupling
❌ Fragile base classes
❌ Class explosion (see diagram 👇)
❌ Painful refactoring

Once hierarchies grow, change becomes risky.

The real problem: inheritance mixes multiple axes of change:
• What something is (Shape)
• How it behaves (Rendering)
When both evolve, the design starts to break.

✅ What modern C++ design prefers
👉 Favor composition over inheritance
• Separate responsibilities
• Inject behavior (Strategy)
• Keep dependencies minimal

⚙️ Even the C++ standard library reflects this:
→ Type erasure instead of inheritance
→ Value semantics, no base class
→ Composition for ownership
→ Algorithms + iterators: decoupled design

👉 Behavior is composed, not inherited.

Thanks and credit to Klaus Iglberger for the insights from 📘 C++ Software Design

#cpp #softwaredesign #cleanarchitecture #designpatterns #cplusplus #oop
I came across std::array in some C++ code recently and had an honest realization — I almost never reach for it in my day-to-day work. For simple use-cases, raw C-style arrays still feel more direct. No wrappers, no abstractions — just a fixed block of memory that does exactly what you expect. But when things get even slightly complex, my instinct immediately shifts to std::vector. Dynamic sizing, cleaner memory management, and overall flexibility make it hard to ignore. So where does std::array really fit? It is safer than C arrays — no decay to pointers, better integration with STL algorithms, and compile-time size guarantees. But in practice, it often sits in this awkward middle ground: * Too “structured” for trivial use * Too limited for dynamic scenarios That said, I’ve started noticing a few areas where it actually makes sense: * Fixed-size buffers where size is known at compile time * Performance-critical paths where heap allocation must be avoided * Interfacing with APIs that expect contiguous memory but still want type safety * Embedded or low-level systems where predictability matters I’m curious — do you actively use std::array, or does it also fall into that “I know it, but rarely use it” category for you? #cpp #cplusplus #softwareengineering #systemprogramming #embeddedsystems #stl #coding #developers #programming
Another benefit to using std::variant is that it automatically handles lifetimes and RAII behavior for the value(s) it contains. A bare union may have less overhead in some cases, but a lot of resource management becomes manual again.