STOP using the Cluster module for heavy computation. You’re literally burning memory 🔥

Most developers still confuse this:
👉 Cluster = scaling requests
👉 Worker Threads = crunching data

Here’s the reality:

Cluster module:
• Spawns multiple processes
• Each process has its own memory (its own V8 instance)
• Great for handling high traffic (I/O)
• ❌ Terrible for CPU-heavy work (wastes RAM)

Worker Threads:
• Run inside a single process
• Can share memory via SharedArrayBuffer
• Built for parallel computation
• ✅ Perfect for CPU-intensive tasks (image processing, encryption, data crunching)

💡 Rule of thumb:
Use Cluster to scale users.
Use Worker Threads to scale performance.

If you're using Cluster for heavy calculations… you're solving the wrong problem.

#NodeJS #JavaScript #BackendDevelopment #WebDevelopment #SoftwareEngineering #Programming #TechTips #SystemDesign #Performance #Developers
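For anyone who hasn't touched worker_threads yet, a minimal sketch looks like this (the file name and the sum-of-squares workload are just placeholders):

// worker-demo.js: a minimal worker_threads sketch (Node.js 12+).
// The file name and the "sum of squares" workload are illustrative only.
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: stays free to serve requests while the worker crunches numbers.
  const worker = new Worker(__filename, { workerData: { n: 50_000_000 } });
  worker.on('message', (result) => console.log('sum of squares:', result));
  worker.on('error', (err) => console.error(err));
} else {
  // Worker thread: the CPU-bound loop runs here without blocking the main event loop.
  let sum = 0;
  for (let i = 0; i < workerData.n; i++) sum += i * i;
  parentPort.postMessage(sum);
}

Run it with node worker-demo.js: the main thread keeps its event loop responsive while the loop runs in the worker.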
C++ Bits: Post 3

The sizeof Trap: Why Is Your Class Bigger Than You Think?

Quick: what's the output of the code in the image given below?

If you said both are 4... you just fell into one of C++'s most classic interview traps.

sizeof(A) is indeed 4 bytes: just one int, nothing else. But sizeof(B)? On most 64-bit systems, it's 16 bytes. Not 4. Not even 12.

Where did the extra 12 bytes come from? The moment you add the virtual keyword, the compiler secretly injects a hidden pointer called the vptr (virtual pointer) into every object of that class. This vptr points to a vtable, a per-class lookup table of function pointers that enables runtime polymorphism.

On a 64-bit system, that hidden pointer is 8 bytes. So your object is now 4 (int) + 8 (vptr) = 12 bytes. But wait, the compiler also adds padding to align the object to an 8-byte boundary. So 12 becomes 16.

Let's break it down:
→ class A: [int x = 4B] → Total: 4 bytes
→ class B: [vptr = 8B] + [int x = 4B] + [padding = 4B] → Total: 16 bytes

One keyword. Four times the memory. And if you're allocating millions of objects, that difference isn't trivial; it's a design decision.

This is exactly why some performance-critical codebases (game engines, embedded systems) deliberately avoid virtual functions. The overhead isn't the virtual call, it's the memory bloat across millions of objects.

Next time someone says "just make it virtual," remember: polymorphism isn't free. The compiler is quietly billing you 8 bytes per object, plus padding.

What's the sneakiest sizeof result you've encountered in an interview or codebase? Let's hear it below.

#CPP #Cplusplus #SoftwareEngineering #TechTrivia #Programming #BackendDevelopment
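A self-contained version of the trap, for readers who can't see the image (the exact numbers are implementation-defined; these are the typical 64-bit results):

#include <cstdio>

class A {
public:
    int x;               // 4 bytes, no hidden members
};

class B {
public:
    int x;               // 4 bytes...
    virtual void f() {}  // ...plus a hidden vptr (8 bytes on 64-bit) and padding
};

int main() {
    // Typical output on a 64-bit system: sizeof(A) = 4, sizeof(B) = 16
    std::printf("sizeof(A) = %zu\n", sizeof(A));
    std::printf("sizeof(B) = %zu\n", sizeof(B));
}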
⚙️ Built a complete compiler toolchain from scratch, targeting a one-instruction computer

Inspired by my digital electronics class with Dr. Charbel Fares, I explored what happens when you strip computation down to its absolute minimum: SUBLEQ (“subtract and branch if ≤ 0”). From there, I implemented a full pipeline.

What I built:
• A SUBLEQ virtual machine
• A compiler toolchain that translates a subset of C into SUBLEQ programs
• A basic runtime model to support control flow and memory layout
• End-to-end execution from high-level C code → raw one-instruction execution

GitHub: https://lnkd.in/dvYeh5Ts

The interesting part was realizing how much of a computer is really just structure layered on top of a very small set of primitives. We tend to think of computers as intelligent systems until we dig into the low-level details and realize they're just deterministic machines.

This project sits somewhere between digital design and compilers, and helped me explore both. If you’re into compilers, low-level systems, or minimal architectures, SUBLEQ is one of those rabbit holes you'd enjoy :p
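For readers who haven't met SUBLEQ before, the whole execution model fits in a handful of lines. This is a generic sketch, not the project's VM; the memory layout and the negative-address halt convention are my assumptions:

#include <stdio.h>

/* Toy SUBLEQ interpreter: each instruction is three words A B C.
 * Semantics: mem[B] -= mem[A]; if the result is <= 0, jump to C, else fall through.
 * A negative C is used here as the halt convention (an assumption, not a standard). */
int main(void) {
    /* Words 0..2: subtract mem[6] (7) from mem[7] (10).
       Words 3..5: halt by zeroing mem[8] and branching to -1.
       Words 6..8: data. */
    long mem[9] = { 6, 7, 3,   8, 8, -1,   7, 10, 0 };
    long pc = 0;

    while (pc >= 0) {
        long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
    printf("mem[7] = %ld\n", mem[7]);   /* prints 3 (10 - 7) */
    return 0;
}

Everything else (addition, copying, unconditional jumps, loops) is built by composing this single instruction, which is exactly what makes a C-to-SUBLEQ compiler interesting.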
Are arr and &arr the same thing in C? 🤔

int arr[10];

They both point to the same starting memory address, but to the compiler they are fundamentally different types.

Let’s look at the math if our array starts at address 1000:

📍 arr (pointer to the 1st element)
-> Evaluates to address 1000
-> Adding 1 moves forward by 1 integer (4 bytes)
-> arr + 1 = 1004

📦 &arr (pointer to the entire array)
-> Evaluates to address 1000
-> Adding 1 moves forward by the whole array size (40 bytes)
-> &arr + 1 = 1040 (the slot just after the array)

💥 The length trick: array size without the sizeof() operator.

Because &arr + 1 points to the address just after the array (1040), subtracting the starting address (1000) from it yields the exact number of elements:

int length = *(&arr + 1) - arr; // gives 10

⚠️ Note: Dereferencing a one-past-the-end pointer is undefined behavior in standard C; this trick is purely for understanding pointer arithmetic, not for production code.

Why the dereference in *(&arr + 1)? It turns the "pointer to the whole array" back into an array expression, which then decays from int[10] to int *, so the subtraction gives you an element count, not bytes.

When you subtract two pointers in C, the compiler looks at their type and says, "These are pointers to integers. The programmer wants to know how many integers fit between these two addresses, not the raw number of bytes." It applies that scaling automatically, without you ever writing sizeof().

Summary: they share the same address, but they have completely different "step sizes" in pointer arithmetic.

#CProgramming #Pointers #CodingTips #SoftwareDevelopment #EmbeddedC #EmbeddedSystems
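Here is a small program that makes the two step sizes visible (the printed addresses vary per run, but the offsets won't):

#include <stdio.h>

int main(void) {
    int arr[10];

    /* Same starting address, different pointer types. */
    printf("arr      = %p\n", (void *)arr);          /* e.g. 1000 */
    printf("arr + 1  = %p\n", (void *)(arr + 1));    /* +4 bytes: the next int */
    printf("&arr     = %p\n", (void *)&arr);         /* same address as arr */
    printf("&arr + 1 = %p\n", (void *)(&arr + 1));   /* +40 bytes: past the whole array */

    /* The length trick from the post: fine as a learning exercise,
       but prefer sizeof(arr) / sizeof(arr[0]) in real code. */
    int length = *(&arr + 1) - arr;
    printf("length   = %d\n", length);               /* 10 */
    return 0;
}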
Part 3 of Routing Back to C is up. A hash table in plain C with chaining, buckets, and a lot more: https://lnkd.in/dG-nvQFJ [Part 3]

#C #Programming #GameDev #LLM #LearnInPublic
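If you just want the core idea before reading the post, a chained bucket boils down to something like the following. This is a generic sketch, not the series' code; the bucket count and the djb2-style hash are arbitrary choices:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64   /* assumption: fixed bucket count, no resizing */

/* One entry in a chain; collisions are handled by linking entries together. */
struct entry {
    char *key;
    int value;
    struct entry *next;
};

static struct entry *buckets[NBUCKETS];

/* djb2-style string hash, reduced to a bucket index. */
static size_t bucket_index(const char *key) {
    size_t h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h % NBUCKETS;
}

static void put(const char *key, int value) {
    size_t i = bucket_index(key);
    struct entry *e = malloc(sizeof *e);
    e->key = strdup(key);        /* strdup is POSIX; copy manually if unavailable */
    e->value = value;
    e->next = buckets[i];        /* prepend to the chain */
    buckets[i] = e;
}

static int get(const char *key, int *out) {
    for (struct entry *e = buckets[bucket_index(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) { *out = e->value; return 1; }
    return 0;
}

int main(void) {
    put("alpha", 1);
    put("beta", 2);
    int v;
    if (get("beta", &v)) printf("beta -> %d\n", v);   /* prints: beta -> 2 */
    return 0;
}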
I solved a major problem every hardware reverse engineer comes across at least once.

It's not uncommon for hardware reverse engineers to come across an unfamiliar or niche architecture with limited tooling. You either have to compromise with an archaic compiler from 2002 or write your own backend for the newest compilers around, both of which take an extraordinary amount of time to get working. While backends like LLVM exist, they still fail to simplify the process.

I designed CHance: a modular, modern, and expressive programming language. CHance aims to sit between C and C#, matching C's low-level support with C#'s modern syntax.

CHance uses its own IR, named ChanceCode. ChanceCode is designed to be sweet and simple, only using what you need. ChanceCode's compiler uses a modular backend system, allowing anyone to create a simple backend in minutes. The compiler leaves all control up to the backends, trusting their authority.

The CHance toolchain doesn't end there! The CHance Assembler is the third step in the chain. It's incredibly easy to add your own architecture to it, emitting to whatever format you desire. The final step is the CHance Linker, which can easily be modified to add a new architecture or a new output format.

If you believe this project has potential, or appreciate the work behind it, feel free to star the project on GitHub! Check it out in my projects: https://lnkd.in/gaj-xDTH
I recently built MathScale, a simple LLVM-based programming language frontend. The goal was to understand how a small language moves from source code to execution.

MathScale takes mathematical expressions and language constructs, then walks through the compiler pipeline:

Lexer -> Parser -> AST -> LLVM IR -> Optimization -> JIT Execution

Some of the features I implemented:
- Arithmetic operations
- Variables and numeric expressions
- User-defined functions
- Function calls
- Conditional expressions
- Loop expressions
- Comparison and boolean operators
- Exponentiation support
- LLVM ORC JIT-based execution

This project helped me connect theory with real compiler engineering. Reading about lexical analysis, recursive descent parsing, AST construction, IR generation, and JIT compilation is one thing. Building a working language frontend with LLVM made those ideas much clearer.

It was also a good reminder that even a “simple” language has many moving parts: grammar design, token handling, precedence rules, code generation, runtime execution, and build tooling.

Project name: MathScale
Tech stack: C++, LLVM, Clang, ORC JIT, PowerShell build scripts

I’m excited to keep improving it and explore more compiler design concepts from here.

GitHub repo: https://lnkd.in/gUwkrzhx

#CompilerDesign #LLVM #Cpp #ProgrammingLanguage #SoftwareEngineering #ComputerScience #BuildInPublic #StudentProjects
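To make the first stage of that pipeline concrete, here is what a minimal expression lexer can look like. This is an illustrative sketch, not MathScale's actual code:

#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Token kinds a small math language typically needs.
enum class TokKind { Number, Identifier, Op, End };

struct Token {
    TokKind kind;
    std::string text;
};

// Turn "x + 41 * foo" into a flat token stream: the input to the parser.
std::vector<Token> lex(const std::string& src) {
    std::vector<Token> out;
    size_t i = 0;
    while (i < src.size()) {
        if (std::isspace((unsigned char)src[i])) { ++i; continue; }
        if (std::isdigit((unsigned char)src[i])) {
            size_t start = i;
            while (i < src.size() && std::isdigit((unsigned char)src[i])) ++i;
            out.push_back({TokKind::Number, src.substr(start, i - start)});
        } else if (std::isalpha((unsigned char)src[i])) {
            size_t start = i;
            while (i < src.size() && std::isalnum((unsigned char)src[i])) ++i;
            out.push_back({TokKind::Identifier, src.substr(start, i - start)});
        } else {
            out.push_back({TokKind::Op, std::string(1, src[i++])});
        }
    }
    out.push_back({TokKind::End, ""});
    return out;
}

int main() {
    for (const Token& t : lex("x + 41 * foo"))
        std::cout << t.text << '\n';
}

From here the parser consumes the token stream according to precedence rules and builds the AST that later becomes LLVM IR.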
Compiler design is the study of how to build a compiler. A compiler converts high-level code like C++ or Java into machine-understandable instructions.

🔹 Example
Source code: a = b + c;
A compiler may translate it into lower-level steps like:
LOAD b
ADD c
STORE a

🔹 Main Phases of Compiler Design
1. Lexical Analysis: breaks code into tokens. Example: int x = 5; → int, x, =, 5, ;. Often implemented with finite automata.
2. Syntax Analysis (Parsing): checks grammar rules. Example: is x = + 5 valid? Usually driven by a context-free grammar.
3. Semantic Analysis: checks meaning: is the variable declared? Is the type correct? Is the scope valid?
4. Intermediate Code Generation: creates a simpler internal representation.
5. Optimization: makes the code faster or smaller. Examples: removing unused code, reducing repeated calculations.
6. Code Generation: produces machine code or assembly.

🔹 Supporting Components
Symbol table, error handler, runtime environment.

🔹 Why It Matters
Compiler design is core for programming languages, operating systems, embedded systems, performance optimization, and IDEs and developer tools.

🔹 Real Compilers
GCC, Clang, javac, the Rust compiler.

🔹 Relation with TOC
Compiler design draws on automata theory, parsing theory, grammars, graph algorithms, and optimization theory.
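As a small taste of the optimization phase, here is constant folding sketched over a toy expression AST (illustrative only; real compilers do this on their intermediate representation):

#include <iostream>
#include <memory>

// Minimal AST: a node is either a constant or a '+' over two children.
struct Node {
    bool isConst;
    int value;                       // valid when isConst
    std::unique_ptr<Node> lhs, rhs;  // valid when !isConst
};

std::unique_ptr<Node> num(int v) {
    return std::unique_ptr<Node>(new Node{true, v, nullptr, nullptr});
}
std::unique_ptr<Node> add(std::unique_ptr<Node> l, std::unique_ptr<Node> r) {
    return std::unique_ptr<Node>(new Node{false, 0, std::move(l), std::move(r)});
}

// Constant folding: if both operands are known at compile time,
// replace the addition with its result so no runtime work remains.
void fold(Node& n) {
    if (n.isConst) return;
    fold(*n.lhs);
    fold(*n.rhs);
    if (n.lhs->isConst && n.rhs->isConst) {
        n.value = n.lhs->value + n.rhs->value;
        n.isConst = true;
        n.lhs.reset();
        n.rhs.reset();
    }
}

int main() {
    auto tree = add(num(2), add(num(3), num(4)));
    fold(*tree);
    std::cout << tree->value << '\n';   // prints 9: computed by the "compiler", not at runtime
}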
💡 C#/.NET Async Operation Tip 🚀
💎 How and when to use 'async' and 'await' 💡

The 'async' and 'await' keywords, introduced in C# 5.0, were designed to make it easier to write asynchronous code that can run in the background while other code is executing. The async keyword marks a method as asynchronous, meaning it can run in the background while other code executes.

⚡ When you use async and await, the compiler generates a state machine behind the scenes.

🔥 Let's look at the other high-level details in the example:

🔸 Task<int> longRunningTask = LongRunningOperationAsync(); starts executing LongRunningOperation.
🔸 Independent work is done, let's assume on the main thread (Thread ID = 1), until await longRunningTask is reached.
🔸 Now, if longRunningTask hasn't finished and is still running, DoSomethingAsync() returns to its calling method, so the main thread doesn't get blocked. When longRunningTask is done, a thread from the ThreadPool (it can be any thread) returns to DoSomethingAsync() in its previous context and continues execution (in this case printing the result to the console).

✅ A second case is that longRunningTask has already finished and the result is available. When the code reaches await longRunningTask, the result is already there, so execution continues on the very same thread (again, printing the result to the console). Of course, that's not what happens in the example, where a Task.Delay(1000) is involved.

🎯 What do you think about async operations?

#csharp #dotnet #programming #softwareengineering #softwaredevelopment
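The referenced image isn't reproduced here, but the scenario it describes reads roughly like this. The method names follow the post; the bodies are an assumption:

using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main() => await DoSomethingAsync();

    static async Task DoSomethingAsync()
    {
        // Kick off the long-running work; execution continues without waiting for it.
        Task<int> longRunningTask = LongRunningOperationAsync();

        Console.WriteLine("Doing independent work on the calling thread...");

        // If the task hasn't finished yet, control returns to the caller here,
        // and a thread-pool thread resumes this method when the task completes.
        int result = await longRunningTask;
        Console.WriteLine($"Result: {result}");
    }

    static async Task<int> LongRunningOperationAsync()
    {
        await Task.Delay(1000);   // stands in for real asynchronous work
        return 42;
    }
}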
🚀 Jetpack Compose: What actually happens inside @Composable? (Deep Dive)

@Composable is not just an annotation. It's a promise to the compiler: 👉 "please transform me."

Think of the Compose compiler as a secret assistant that rewrites your code before the JVM sees it.

Step 1: You write this

@Composable
fun Greeting(name: String) {
    Text("Hello, $name")
}

Step 2: Compiler transformation

The compiler secretly adds two hidden parameters:

fun Greeting(
    name: String,
    $composer: Composer,
    $changed: Int
)

• $composer → tracks the position in the UI tree (SlotTable)
• $changed → a bitmask that tells whether inputs changed
👉 This is how Compose decides whether to skip execution

Step 3: Restart group (recomposition scope)

$composer.startRestartGroup(KEY)
// UI code
$composer.endRestartGroup()?.updateScope { c, _ -> Greeting(name, c, 1) }

👉 Registers a stored lambda
👉 Allows recomposition of ONLY this scope (not the whole UI)

Step 4: Smart skipping

At runtime, Compose checks: 👉 “Did anything change?”
• If NO → the entire function is skipped (zero work)
• If YES → the function re-executes
👉 This is the core performance optimization

Step 5: remember {} becomes a SlotTable read

val count = remember { mutableStateOf(0) }

➡️ Transforms into roughly:

val count = $composer.cache(false) { mutableStateOf(0) }

👉 Stored in the SlotTable
👉 Retrieved by position
👉 Survives recomposition

🧠 Interview Summary
"@Composable is a compiler transformation where functions are converted into restartable groups tracked by a Composer. A bitmask enables skipping, and stored lambdas allow recomposition of only the affected scopes."

❓ Why can't a @Composable be called from a normal function?
👉 Because normal functions don't have a $composer.
✔ It's a compile-time restriction.

💬 This is a commonly asked deep-dive question in Android interviews

#AndroidDevelopment #JetpackCompose #Kotlin #ComposeInternals #Recomposition #StateManagement #CleanArchitecture #MVVM #MVI #AndroidInterview #InterviewPreparation #SoftwareEngineer #MobileDeveloper #DeveloperLife #Programming #Coding #DevCommunity
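A tiny example of that scoped recomposition in practice. This is a sketch that assumes a standard Compose Material 3 project; only the restart group that reads the changed state re-executes when count changes:

import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun Counter() {
    // Stored in the SlotTable via remember, so the value survives recomposition.
    var count by remember { mutableStateOf(0) }

    Button(onClick = { count++ }) {
        // Only this restart group re-executes when `count` changes;
        // sibling composables whose inputs are unchanged are skipped.
        Text("Clicked $count times")
    }
}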