Are arr and &arr the same thing in C? 🤔

int arr[10];

They both evaluate to the same starting memory address, but to the compiler they are fundamentally different types. Let's look at the math if our array starts at address 1000 (assuming a 4-byte int):

📍 arr (pointer to the 1st element, type int *)
-> Evaluates to address 1000
-> Adding 1 moves forward by 1 integer (4 bytes)
-> arr + 1 = 1004

📦 &arr (pointer to the entire array, type int (*)[10])
-> Evaluates to address 1000
-> Adding 1 moves forward by the whole array size (40 bytes)
-> &arr + 1 = 1040 (the slot just after the array)

💥 The "no sizeof" length trick: getting the array length without the sizeof operator. Because &arr + 1 points to the address just after the array (1040), subtracting the array's start from it yields the exact number of elements:

int length = *(&arr + 1) - arr; // gives the result of 10

⚠️ Note: Dereferencing a one-past-the-end pointer is undefined behavior in standard C — this trick is purely for understanding pointer arithmetic, not for production code.

Why the dereference in *(&arr + 1)? &arr + 1 has type int (*)[10]; dereferencing it gives an int[10], and the result naturally decays from int[10] to int * — so the subtraction is between two int * values and gives you an element count, not bytes. When you subtract two pointers in C, the compiler looks at their data type and says, "These are pointers to integers. The programmer wants to know how many integers fit between these two addresses, not the raw number of bytes." It automatically applies the division by sizeof(int), without requiring you to write sizeof in your code.

Summary: They share the same address, but they have completely different "step sizes" in pointer arithmetic.

#CProgramming #Pointers #CodingTips #SoftwareDevelopment #EmbeddedC #EmbeddedSystems
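A runnable version of the numbers above (my own sketch; actual addresses vary, and the 4-byte int is platform-typical, not guaranteed):

#include <stdio.h>

int main(void) {
    int arr[10];

    printf("arr      = %p\n", (void *)arr);        /* address of the first element */
    printf("arr + 1  = %p\n", (void *)(arr + 1));  /* moves by sizeof(int): +4     */
    printf("&arr     = %p\n", (void *)&arr);       /* same address, different type */
    printf("&arr + 1 = %p\n", (void *)(&arr + 1)); /* moves by sizeof(arr): +40    */

    /* Pointer subtraction scales by the element size, so this yields 10.
       Per the note above: an understanding aid, not production code. */
    int length = *(&arr + 1) - arr;
    printf("length   = %d\n", length);

    return 0;
}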
Rakesh Beeram’s Post
More Relevant Posts
-
Part 3 of Routing Back to C is up: a hash table in plain C with chaining, buckets... and a lot more. https://lnkd.in/dG-nvQFJ [Part 3] #C #Programming #GameDev #LLM #LearnInPublic
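(Not the series code — just a minimal sketch of chaining: a fixed bucket array of linked lists with string keys; every name here is illustrative.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

typedef struct Entry {
    char *key;
    int value;
    struct Entry *next; /* collisions chain into a linked list */
} Entry;

static Entry *buckets[NBUCKETS];

static unsigned hash_str(const char *s) { /* djb2-style string hash */
    unsigned h = 5381;
    while (*s) h = h * 33u + (unsigned char)*s++;
    return h % NBUCKETS;
}

static void put(const char *key, int value) {
    unsigned i = hash_str(key);
    for (Entry *e = buckets[i]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; } /* update in place */
    Entry *e = (Entry *)malloc(sizeof *e); /* insert at the head of the chain */
    e->key = (char *)malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->value = value;
    e->next = buckets[i];
    buckets[i] = e;
}

static const int *get(const char *key) {
    for (Entry *e = buckets[hash_str(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) return &e->value;
    return NULL; /* not found */
}

int main(void) {
    put("apple", 1);
    put("banana", 2);
    printf("%d\n", *get("banana")); /* 2 */
    return 0;
}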
-
𝗥𝘂𝘀𝘁'𝘀 𝗭𝗲𝗿𝗼-𝗖𝗼𝘀𝘁 𝗔𝗯𝘀𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻𝘀

Rust promises zero-cost abstractions. You write high-level code. The compiler produces fast machine code. It works as if you wrote low-level code by hand.

Monomorphization is the secret. You write one generic function. The compiler writes many. It generates a concrete copy for every type you use. If you use a function with integers and floats, the compiler creates two versions. One for integers. One for floats. The CPU never sees a generic type. It does not use vtables or boxing. This is why iterator chains are fast: the compiler collapses the chain into one loop.

This speed comes with a price.
- Binary size. Every unique type creates a new copy. Large projects get bigger binaries. This matters for WebAssembly.
- Compile time. The compiler does heavy work during code generation. This makes Rust slow to compile.
- Cache pressure. More code in memory causes instruction cache misses.

You have two choices for dispatch. Static dispatch uses generics. The compiler resolves the target. It is fast. Dynamic dispatch uses trait objects. Use dyn Trait. The compiler creates one function. It uses a vtable lookup at runtime. It is slower but saves space. Use generics for performance. Use dyn Trait for mixed collections or when binary size matters.

You check this with tools. Install cargo-show-asm. Run it on your functions. You will see separate assembly for different types.

Zero-cost does not mean free. You move the cost to the compiler. You trade build time for runtime speed.

Source: https://lnkd.in/gUEFaE3c
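This trade-off maps closely onto C++: templates are monomorphized like Rust generics, and virtual calls go through a vtable like dyn Trait. A sketch of the same distinction in C++ (keeping this page's examples in one language; the Rust equivalents would be a generic fn and a &dyn trait object):

#include <iostream>

// Static dispatch: the compiler stamps out one copy of `twice` per type used,
// which is what Rust's monomorphization does for generic functions.
template <typename T>
T twice(T x) { return x + x; }

// Dynamic dispatch: one compiled function, target resolved through a vtable
// at runtime, which is what a dyn Trait call does.
struct Doubler {
    virtual int apply(int x) const { return x + x; }
    virtual ~Doubler() = default;
};

int apply_via_vtable(const Doubler &d, int x) { return d.apply(x); }

int main() {
    std::cout << twice(21) << " " << twice(1.5) << "\n"; // two instantiations: int, double
    Doubler d;
    std::cout << apply_via_vtable(d, 21) << "\n";        // one function, indirect call
}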
-
🚀 Day 30 of the DSA Grind: Translating Trees into Strings (and back again)!

Today, I built a full two-way encoding engine from scratch for Serialize and Deserialize Binary Tree (LeetCode 297). This isn't just a traversal problem; it's an absolute masterclass in C++ memory management and string parsing. 🌳💻

Here is the breakdown:

⚙️ 1. The Serializer (Live Memory to String)
I used a Level-Order (BFS) engine to flatten the tree. But there is a massive trap: you MUST explicitly record the null pointers (I used "#,"). If you skip them, the tree's physical shape is lost forever.
The C++ Danger Zone: If you don't lock down your if/else blocks perfectly, your program will try to read the value of a NULL node and instantly hit you with a Segmentation Fault at runtime. Strict memory routing is required!

⚙️ 2. The Deserializer (String back to Live Memory)
How do you turn a massive, flat string like "1,2,3,#,#,4,5," back into live TreeNode pointers? Enter the C++ stringstream. Instead of writing messy for loops to search for commas, I loaded the string into a stream. Using getline with ',' as the delimiter, it acted as an infinite dispenser, feeding exactly one token at a time into my BFS queue so I could perfectly wire up the left and right children. (See the sketch below.)

The Binary Tree architecture is officially locked in. The momentum is unreal right now! 🔥

#DSA #LeetCode #DataStructures #Algorithms #BinaryTrees #SoftwareEngineering #TechJourney #Coding #CPlusPlus #InterviewPrep
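A condensed sketch of the two engines described above, assuming the usual LeetCode TreeNode definition; error handling and node cleanup are omitted:

#include <queue>
#include <sstream>
#include <string>

struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

std::string serialize(TreeNode *root) {
    std::ostringstream out;
    std::queue<TreeNode *> q;
    q.push(root);
    while (!q.empty()) {
        TreeNode *node = q.front(); q.pop();
        if (!node) { out << "#,"; continue; } // record nulls explicitly
        out << node->val << ",";
        q.push(node->left);
        q.push(node->right);
    }
    return out.str();
}

TreeNode *deserialize(const std::string &data) {
    std::istringstream in(data);
    std::string tok;
    if (!std::getline(in, tok, ',') || tok == "#") return nullptr;
    TreeNode *root = new TreeNode(std::stoi(tok));
    std::queue<TreeNode *> q;
    q.push(root);
    // Each non-null node in the queue consumes exactly two tokens: left, right.
    while (!q.empty() && std::getline(in, tok, ',')) {
        TreeNode *node = q.front(); q.pop();
        if (tok != "#") { node->left = new TreeNode(std::stoi(tok)); q.push(node->left); }
        if (std::getline(in, tok, ',') && tok != "#") {
            node->right = new TreeNode(std::stoi(tok));
            q.push(node->right);
        }
    }
    return root;
}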
-
Why does C have the restrict keyword?

At first, it feels unnecessary… until you understand what the compiler is thinking.

Consider this:

void add(int *a, int *b) {
    for (int i = 0; i < 3; i++) {
        a[i] = a[i] + b[i];
    }
}

Looks simple. But the compiler has a problem: what if a and b point to the same memory? So it plays safe:
-> reloads values from memory
-> avoids aggressive optimizations

Now with restrict:

void add(int *restrict a, int *restrict b) {
    for (int i = 0; i < 3; i++) {
        a[i] = a[i] + b[i];
    }
}

You are telling the compiler: "These pointers do NOT overlap." Now the compiler can:
-> avoid unnecessary memory reads
-> reuse registers
-> generate faster code

One caveat: restrict is a promise, not a check. If you call the restrict version with overlapping pointers (say, add(p, p)), the behavior is undefined.

Key idea:
Without restrict → compiler is cautious
With restrict → compiler is confident

#EmbeddedSystems #EmbeddedC #FirmwareDevelopment #CProgramming #LowLevelProgramming
-
Add two numbers without using + or -. I stared at this for a long time before it finally made sense.

Sum of Two Integers - LeetCode 371 - Medium

This problem has one rule — you cannot use + or - operators. My brain immediately said that is impossible. Addition is addition. How do you add without adding?

Then I remembered how addition actually works at the hardware level. When a CPU adds two numbers it does not magically compute the sum. It uses XOR for the sum bits and AND with a left shift for the carry bits. That is literally how circuits work.

So I replicated that. XOR of a and b gives the sum without carry. AND of a and b shifted left gives the carry. I keep doing this until there is no carry left. The mask 0xFFFFFFFF handles the edge case of negative numbers in Python, since Python integers have infinite precision unlike other languages.

What clicked for me: I was thinking like a programmer. This problem forced me to think like a hardware engineer. That perspective shift was everything.

Key Learnings
1) XOR as Addition: a ^ b adds two numbers without considering carry. This is exactly how half adder circuits work in real hardware.
2) AND with Left Shift as Carry: (a & b) << 1 computes where the carry needs to go. This is the carry propagation step.
3) Python Mask Handling: Python has infinite precision integers, so negative numbers behave differently. The 0xFFFFFFFF mask forces 32-bit behaviour to handle this correctly.

Time and Space Complexity
Time Complexity: O(1) — At most 32 iterations since integers are 32-bit.
Space Complexity: O(1) — Only a few variables used.

This problem taught me something no textbook ever did — addition is just XOR and carry in disguise.

Did you know this is how your CPU actually adds numbers? Drop your thoughts below 👇

#LeetCode #BitManipulation #Blind75 #SDEPrep #DataStructures #Python #ProblemSolving #CodingJourney #Freshers #AmazonSDE
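The post's solution is Python; here is the same half-adder loop sketched in C++ (one language for the examples on this page). Unsigned arithmetic plays the role of Python's 0xFFFFFFFF mask, since it wraps at 32 bits by definition:

#include <cstdint>
#include <iostream>

// Add two 32-bit integers using only XOR (sum bits) and AND + shift (carry bits).
int getSum(int a, int b) {
    uint32_t x = (uint32_t)a, y = (uint32_t)b; // unsigned: wraparound is well-defined
    while (y != 0) {
        uint32_t carry = (x & y) << 1; // carry bits, moved one place left
        x = x ^ y;                     // sum without carry (the half-adder step)
        y = carry;                     // repeat until no carry remains (<= 32 rounds)
    }
    return (int32_t)x; // reinterpret the bit pattern as signed
}

int main() {
    std::cout << getSum(3, 5) << "\n";  // 8
    std::cout << getSum(-2, 7) << "\n"; // 5
}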
-
⚙️ Built a complete compiler toolchain from scratch, targeting a one-instruction computer

Inspired by my digital electronics class with Dr. Charbel Fares, I explored what happens when you strip computation down to its absolute minimum: SUBLEQ ("subtract and branch if ≤ 0"). From there, I implemented a full pipeline.

What I built:
• A SUBLEQ virtual machine
• A compiler toolchain that translates a subset of C into SUBLEQ programs
• A basic runtime model to support control flow and memory layout
• End-to-end execution from high-level C code → raw one-instruction execution

GitHub: https://lnkd.in/dvYeh5Ts

The interesting part was realizing how much of a computer is really just structure layered on top of a very small set of primitives. We tend to think of computers as intelligent systems until we go deep into low-level work and realize they're just deterministic machines.

This project sits somewhere between digital design and compilers, and helped me explore both. If you're into compilers, low-level systems, or minimal architectures, SUBLEQ is one of those rabbit holes that would be interesting to you :p
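Not the author's VM — just a minimal sketch of what a SUBLEQ interpreter boils down to (the halt-on-negative-jump-target convention here is an assumption; real SUBLEQ machines vary):

#include <iostream>
#include <vector>

// SUBLEQ: each instruction is three words (a, b, c).
// Execute: mem[b] -= mem[a]; if the result is <= 0, jump to c; else fall through.
void run(std::vector<long> &mem) {
    long pc = 0;
    while (pc >= 0 && pc + 2 < (long)mem.size()) { // negative pc (or overrun) halts
        long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}

int main() {
    // One instruction: mem[4] -= mem[3] (5 - 5 = 0), result <= 0, so jump to -1 (halt).
    std::vector<long> mem = {3, 4, -1, 5, 5};
    run(mem);
    std::cout << mem[4] << "\n"; // 0: a cell cleared with a single SUBLEQ
}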
-
𝗖𝗼𝗿𝗲𝗯𝗼𝗼𝘁 𝗕𝗼𝗼𝘁 𝗦𝘁𝗮𝗴𝗲𝘀 - 𝗧𝗵𝗲 𝗧𝗿𝘂𝘁𝗵 𝗧𝗵𝗮𝘁 𝗞𝗲𝗲𝗽𝘀 𝗬𝗼𝘂 𝗦𝗮𝗻𝗲 𝗪𝗵𝗶𝗹𝗲 𝗗𝗲𝗯𝘂𝗴𝗴𝗶𝗻𝗴

If you've worked with coreboot long enough, you've probably heard:
bootblock → romstage → ramstage

Sounds simple, until you start debugging and everything feels confusing. Here's the missing piece most explanations don't tell you.

The Key Insight - Same Code, Different Worlds

Coreboot stages are not defined by directories. They are created by build-time environments that compile the same code into different binaries, using macros like ENV_BOOTBLOCK, ENV_ROMSTAGE, ENV_RAMSTAGE:

#if ENV_BOOTBLOCK
early_cpu_init();
#endif

#if ENV_ROMSTAGE
dram_init();
#endif

At compile time:
* Only relevant code is included
* Everything else is removed

Critical Reality Check

These macros do NOT exist at runtime. There is: no stage variable, no runtime branching, no dynamic switching. Instead, each stage is a completely separate binary.

What the Build Actually Produces
* bootblock.elf → tiny, runs from ROM
* romstage.elf → runs in Cache-as-RAM
* ramstage.elf → runs in DRAM

And here's where things get more interesting 👇

Each stage is built with:
* Different compiler flags (size vs features)
* Different linker scripts (memory layout matters!)

Why? Because each stage runs in a completely different environment:
* Bootblock - Flash (very limited)
* Romstage - Cache-as-RAM (CAR)
* Ramstage - DRAM (fully available)

So each binary must be tailored precisely for where it runs.

So How Do These Binaries Find Each Other?

They're all stored inside the Coreboot File System (CBFS) — a tiny filesystem embedded in flash that contains: romstage, ramstage, payload, configuration data.

Runtime Flow
Reset → Bootblock
↓ (loads from CBFS)
Romstage
↓ (loads from CBFS)
Ramstage
↓
Payload

Each stage:
* knows how to locate the next stage in CBFS
* loads it into the appropriate memory
* transfers control

Why This Insight Matters (A LOT)

When debugging, if you don't understand this, you'll ask: "Why isn't my code running?" But the real answer might be:
* It was compiled into a different stage
* Or excluded by ENV_*
* Or placed in a binary that never executes

Final Takeaway

Understanding build-time macros vs runtime execution, plus the CBFS handoff, is the only way to stay sane while debugging, because:
* Macros decide what gets compiled
* Linker scripts decide where it lives
* CBFS decides how stages find each other
* Runtime just executes pre-built binaries
-
📅 Day 12: printf is 53 years old. cout chaining is an eyesore. C++20 finally fixed string formatting.

We've all written it. The cout chain that wraps across three lines just to print two values. Or the printf format string, where a single wrong specifier (%d instead of %ld) silently corrupts your output or crashes at runtime. No compiler warning. No type checking. Just undefined behavior waiting to happen.

std::format lands in C++20 and changes everything. Python-style {} placeholders, fully type-safe, no format specifiers to memorize. The compiler knows what type you're passing. It formats it correctly. End of story.

And it's not just cleaner, it's faster. iostream carries significant overhead from locale handling and the synchronized stream machinery. std::format skips all of that. Benchmarks consistently put it ahead of cout for string construction, often significantly.

🧠 Key insight: printf is unsafe because the format string and the arguments are disconnected, and the compiler can't match them. cout is safe, but composing output is syntactic noise. std::format gives you the readability of one and the safety of the other, with better performance than both.

Worth knowing:
· std::format returns a std::string. Use std::print (C++23) to write directly to stdout without constructing one.
· Format specifiers still exist for precision and alignment ({:.2f}, {:>10}), but you only reach for them when you need them.
· Custom types can opt in via a std::formatter<T> specialization: your types, your formatting rules.
· Not on C++20 yet? The {fmt} library is the open-source predecessor. Same API, drop-in ready.

Write format strings that read like sentences. Not code that reads like noise.

Day 12 of my C++ deep-dive series. Missed Day 11? Go check out the composition over inheritance breakdown.

Still on printf or cout in your codebase? What's blocking the move to std::format? 👇

#cpp #cplusplus #cpp20 #programming #softwareengineering
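A minimal before/after sketch (needs C++20 with a standard library that ships <format>, e.g. recent GCC or MSVC):

#include <format>
#include <iostream>
#include <string>

int main() {
    std::string name = "Ada";
    long count = 42;

    // printf("%s has %d items\n", name.c_str(), count);
    // ^ compiles, but %d with a long is already wrong: silent undefined behavior.

    // std::format: type-safe, no specifiers to memorize.
    std::string s = std::format("{} has {} items", name, count);
    std::cout << s << "\n"; // Ada has 42 items

    // Specifiers only when you need them: alignment and precision.
    std::cout << std::format("[{:>10}] [{:.2f}]", name, 3.14159) << "\n";
    // [       Ada] [3.14]
}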
-
C++ functions are simple, let me explain:

There are two types of functions: free/normal functions and class functions.

A class function, also referred to as a "method", applies to an instance of the class (an object). Under the hood it's just a normal function taking a pointer to the instance as an argument. Except when the function is static: then it does not apply to an instance.

There are special functions in classes, like the default, copy and move constructors and the assignment operators. There are also operators, which means, for example, you can write classA + classB and implement custom logic for what should happen.

Functions can be inline or externally defined. Class functions that are defined where they are declared are implicitly inline. Class functions can be marked const, which means they can be called on const objects. Functions can also be constexpr or consteval, but that's all about moving computation to compile time.

You're still with me?

When you define the function call operator() for a class, you can treat the class as a function. This is useful when you want to pass it as a function into another function. Lambdas are syntactic sugar around such function objects.

std::function is not a function. It's an object, wrapping anything callable, including function pointers and function objects. A lambda should not be confused with std::function, but you can put a lambda inside an std::function. (Sketch below.)

That's all. Oh no, I forgot to mention virtual functions, template functions, throwing vs non-throwing functions, deleted functions, user-defined vs compiler-generated functions, non-returning functions, coroutines, function contracts... 🤯

Simple, right? 😉

Join my newsletter: https://lnkd.in/eqzAYgs6
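A tiny sketch of those last points: a function object via operator(), a lambda, and std::function wrapping both (all names here are mine):

#include <functional>
#include <iostream>

// A function object: defining operator() lets instances be called like functions.
struct Multiplier {
    int factor;
    int operator()(int x) const { return x * factor; }
};

int main() {
    Multiplier triple{3};
    auto addOne = [](int x) { return x + 1; }; // sugar for a compiler-generated function object

    std::function<int(int)> f = triple; // std::function wraps anything callable
    std::cout << f(10) << "\n";         // 30
    f = addOne;                         // ...including lambdas
    std::cout << f(10) << "\n";         // 11
}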
-
In C++, both prefix (++obj) and postfix (obj++) have the same name: operator++. The compiler wouldn't know which one to call when it sees obj++. Since C++ does not allow overloading based solely on the return type, the language designers needed a way to change the signature of the function.

To solve this, the C++ standard decided that:
Prefix (++obj): takes no arguments.
Postfix (obj++): takes a dummy int argument.

When you write obj++, the compiler internally calls obj.operator++(0). You never actually pass a number to it; the compiler inserts a 0 automatically just to satisfy the function signature.

The exact reason why postfix is slower than prefix:

Overloading operator++(int) {
    Overloading tmp;       // 1. Create a copy (old state)
    tmp.d = this->d;       // 2. Save the current value
    this->d = this->d + 1; // 3. Increment the original
    return tmp;            // 4. Return the OLD state by VALUE
}

Return by value: postfix must return by value because the tmp object is a local variable. It will be destroyed when the function ends, so you cannot return a reference to it.

In post-increment (obj++), the rule is: "Use the value first, then increment it." To achieve this in code, you create a local copy (Overloading tmp) to "remember" what the object was.

In C++, you cannot return a reference to a local variable. The tmp object is created on the stack within the scope of the operator++ function. The moment the function reaches the return statement and exits, the stack frame is destroyed and tmp ceases to exist. If you tried to return Overloading&, you would be returning a dangling reference to a memory location that has already been reclaimed. This would lead to non-deterministic crashes.

#include <iostream>
using namespace std;

class Overloading {
private:
    int d;
public:
    Overloading() : d(10) { // default constructor
        cout << "Default constructor is invoked " << this->d << endl;
    }
    Overloading& operator++() {   // prefix: increment, then return *this by reference
        ++d;
        return *this;
    }
    Overloading operator++(int) { // postfix: the dummy int selects this overload
        Overloading temp;         // save the old state (this also prints a constructor line)
        temp = *this;
        ++d;
        return temp;              // must return by VALUE: temp is local
    }
    ~Overloading() { // invoked whenever an object is destroyed
        cout << "Destructor is invoked" << endl;
    }
    friend ostream& operator<<(ostream& out, const Overloading& obj) {
        out << obj.d; // we choose to print the value of 'd'
        return out;   // return the stream to allow chaining
    }
};

int main() {
    Overloading obj1;
    Overloading obj2;
    obj2 = obj1++; // post-increment
    cout << "Post incre obj1 " << obj1 << endl;
    cout << "Post incre obj2 " << obj2 << endl;
    obj2 = ++obj1; // pre-increment
    cout << "Pre incre obj1 " << obj1 << endl;
    cout << "Pre incre obj2 " << obj2 << endl;
    return 0;
}

o/p (with the usual copy elision, temp and the returned value are one object):

Default constructor is invoked 10
Default constructor is invoked 10
Default constructor is invoked 10
Destructor is invoked
Post incre obj1 11
Post incre obj2 10
Pre incre obj1 12
Pre incre obj2 12
Destructor is invoked
Destructor is invoked

The third constructor line and the first destructor line come from temp inside operator++(int): that is the visible cost of post-increment.