🔥 𝐒𝐭𝐫𝐢𝐧𝐠 𝐯𝐬 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐢𝐥𝐝𝐞𝐫 𝐯𝐬 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐟𝐟𝐞𝐫 𝐢𝐧 𝐉𝐚𝐯𝐚 — 𝐒𝐭𝐨𝐩 𝐂𝐨𝐧𝐟𝐮𝐬𝐢𝐧𝐠 𝐓𝐡𝐞𝐦!

This is one of the most asked Java interview questions — yet most developers can't explain the difference clearly. Let me fix that 👇

🔵 𝐒𝐭𝐫𝐢𝐧𝐠 — 𝐈𝐦𝐦𝐮𝐭𝐚𝐛𝐥𝐞 & 𝐓𝐡𝐫𝐞𝐚𝐝-𝐒𝐚𝐟𝐞

String s1 = "Hello";
String s2 = s1 + " World"; // creates a NEW object every time!
String s3 = "Hello";
// s1 == s3 → true (same String pool reference)
// s1 == s2 → false (s2 is a brand new object)

✅ Stored in the String Pool — memory efficient for reuse
✅ Thread-safe by design (immutable)
❌ Every + or concat() creates a new object — bad in loops!

🩷 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐢𝐥𝐝𝐞𝐫 — 𝐌𝐮𝐭𝐚𝐛𝐥𝐞 & 𝐅𝐚𝐬𝐭

StringBuilder sb = new StringBuilder();
sb.append("Hello").append(" World"); // same object
sb.insert(0, "Say: ");
sb.reverse();
String result = sb.toString();

✅ Modifies the same object — no new allocations
✅ Fastest option for string manipulation
❌ NOT thread-safe — don't share between threads

🟣 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐟𝐟𝐞𝐫 — 𝐓𝐡𝐫𝐞𝐚𝐝-𝐒𝐚𝐟𝐞 𝐛𝐮𝐭 𝐒𝐥𝐨𝐰𝐞𝐫

StringBuffer sb = new StringBuffer();
sb.append("Hello");  // synchronized
sb.append(" World");
// Same API as StringBuilder, but all methods are synchronized

✅ Thread-safe — safe for multi-threaded access
❌ Synchronization adds overhead — slower than StringBuilder

📊 𝐐𝐮𝐢𝐜𝐤 𝐂𝐨𝐦𝐩𝐚𝐫𝐢𝐬𝐨𝐧

Feature        String       StringBuilder   StringBuffer
Mutable?       ❌ No        ✅ Yes          ✅ Yes
Thread-safe?   ✅ Yes       ❌ No           ✅ Yes
Speed          Slowest*     Fastest         Moderate
Use case       Constants    Loops           Multi-thread

*+ in a loop is slow. The compiler may optimize single-line concatenation.

💡 Golden Rule:
Use String for fixed values.
Use StringBuilder for manipulation in single-threaded code.
Use StringBuffer only when multiple threads share the same buffer.

Drop a 🔥 if this cleared your confusion!
Tag a Java dev who still uses + inside loops 😄
👇 Which one do you use most in your projects?

#Java #String #StringBuilder #StringBuffer #CoreJava #Backend #SpringBoot #JavaDeveloper #100DaysOfCode #InterviewPrep #Programming
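To make the "bad in loops" point concrete, a minimal sketch (class and method names are mine, purely for illustration). Both methods produce the same string; only the allocation pattern differs: `+` in a loop copies every previously appended character on each iteration, roughly O(n²) character copies, while StringBuilder appends into one growing buffer, roughly O(n).

```java
public class ConcatLoop {
    // String + in a loop: allocates a brand-new String each iteration,
    // copying everything built so far.
    static String withPlus(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i + ",";   // new String object every time
        }
        return s;
    }

    // StringBuilder: mutates one internal buffer, amortized O(1) per append.
    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withPlus(5));     // 0,1,2,3,4,
        System.out.println(withBuilder(5));  // 0,1,2,3,4,
    }
}
```

Identical results, very different cost profiles as n grows.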
Why Dart Wants You to Stop Using Static-Only Classes

If you’re like me and your brain is wired for Java or C#, you’re used to the "Utility Class" pattern. We’ve all written dozens of StringUtils or Config classes filled with public static members because, in those languages, a function must have a home inside a class.

But then you move to Dart, and the linter starts shouting at you: "Avoid defining classes with only static members."

At first, I hated this. It felt messy. It felt "un-OOP." But after digging deeper, I realized I was fighting the language instead of using its strengths.

The "Namespace" Argument
In C#, we use classes as namespaces. In Dart, the file (library) is the namespace. Instead of forcing a class to act as a container, Dart allows top-level members: you can define variables and functions directly in the file.

Why I was against it (and why I changed my mind):

1. The Organization Fear: I thought my global scope would become a mess.
The Dart Solution: You can import files with a prefix. import 'utils.dart' as utils; allows you to call utils.calculate() — achieving the same organization without the class boilerplate.

2. Tree Shaking: Dart is designed for the web and mobile. Top-level functions are much easier for the compiler to "tree-shake" (remove if unused), resulting in smaller, faster apps.

3. The "Static" Lie: A class with only static members is just a library in disguise. Why wrap it in a class keyword if you never intend to create an instance of the class or use inheritance?

The Compromise
If you really can’t let go of the grouping (like for AppColors or AppIcons), Dart suggests adding a private constructor so the class can’t be instantiated:

class AppConstants {
  const AppConstants._(); // The private constructor
  static const String apiKey = "12345";
}

The Verdict: Dart isn’t asking you to be unorganized; it’s asking you to stop using 1990s workarounds in a language that supports top-level logic.

What do you think?
Is the "Utility Class" a hill you're willing to die on, or are you embracing the top-level freedom?

#SoftwareEngineering #Java #CSharp #Dart #Flutter #CleanCode #EngineeringDiscipline
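For readers who only know one side of this comparison: the Java-side pattern the post contrasts with looks like the sketch below. This is a toy StringUtils for illustration (not the Apache Commons class): a final class with a private constructor so it can never be instantiated, holding only static members.

```java
// The classic Java "Utility Class" pattern: final, uninstantiable,
// static-only. In Dart this whole wrapper would just be a file of
// top-level functions.
public final class StringUtils {
    private StringUtils() {
        // Defensive: even reflection-based instantiation fails loudly.
        throw new AssertionError("No instances");
    }

    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}
```

Usage is always through the class name, e.g. StringUtils.isBlank("  "), which is exactly the "class as namespace" habit Dart's linter pushes back on.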
DotNet Blog: Union types expose a long-ignored gap in C#: type safety was often claimed, but rarely enforced at the boundaries where it matters most.

C# has historically relied on conventions, null checks, and defensive programming to simulate closed data models, yet this approach quietly shifts complexity into runtime logic. With union types in C# 15, the language finally introduces a construct that enforces a finite set of possible states at compile time, backed by exhaustive pattern matching.

The real shift is not syntactic, but semantic: instead of asking developers to remember every possible case, the compiler now demands completeness. This directly challenges a widespread assumption that flexibility equals robustness. In practice, unconstrained models tend to accumulate edge cases, while closed unions force deliberate design decisions early.

However, this also introduces friction. Existing architectures built on loosely typed DTOs or polymorphic hierarchies will not adapt without refactoring. The promise of safer code comes at the cost of stricter modeling discipline, and not every team is prepared to embrace that trade-off.

Union types are less about new capability and more about correcting a structural weakness in how state is represented. Ignoring that correction means continuing to rely on patterns that fail silently instead of failing fast, which is precisely where many production issues originate. In that sense, this is not just a feature, but a subtle shift toward more explicit and verifiable domain modeling within #CSharp and the broader #DotNet ecosystem, with clear implications for #SoftwareArchitecture and #TypeSafety.

Link for first comment:
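Java already ships this style of closed modeling: sealed types plus exhaustive switch (Java 21+). A minimal sketch, using a hypothetical PaymentState domain of my own invention, to show the same compile-time completeness guarantee the post describes:

```java
// A closed set of states. The compiler knows ALL possible subtypes,
// so a switch over PaymentState must cover every case — no default
// branch, no "forgotten state" failing silently at runtime.
sealed interface PaymentState permits Pending, Settled, Failed {}
record Pending() implements PaymentState {}
record Settled(long amountCents) implements PaymentState {}
record Failed(String reason) implements PaymentState {}

public class UnionDemo {
    static String describe(PaymentState s) {
        // Adding a fourth state breaks the build right here,
        // instead of surfacing as a production bug later.
        return switch (s) {
            case Pending p -> "pending";
            case Settled st -> "settled: " + st.amountCents();
            case Failed f -> "failed: " + f.reason();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Settled(499))); // settled: 499
    }
}
```

The trade-off the post names applies here too: sealed hierarchies demand deliberate modeling up front, in exchange for the compiler enforcing completeness forever after.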
𝐒𝐚𝐦𝐞 𝐋𝐨𝐠𝐢𝐜, 𝐃𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭 𝐋𝐚𝐛𝐞𝐥𝐬

𝟏. 𝐅𝐢𝐞𝐥𝐝𝐬 𝐯𝐬. 𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞𝐬
The term variable is the "umbrella" term for any named storage in memory. However, where that storage lives changes its classification.

𝐅𝐢𝐞𝐥𝐝 (𝐓𝐡𝐞 𝐂𝐥𝐚𝐬𝐬 𝐌𝐞𝐦𝐛𝐞𝐫)
• Definition: A variable declared directly inside a class. It defines the state or attributes of an object.
• Scope: Accessible to all methods within the class.
• Lifetime: Lives as long as the object exists (instance field) or as long as the application runs (static field).
• Example: In class Car { String color; }, color is a field.

𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞 (𝐓𝐡𝐞 𝐋𝐨𝐜𝐚𝐥 𝐖𝐨𝐫𝐤𝐞𝐫)
• Definition: Generally refers to local variables declared within a specific block or method.
• Scope: Limited to the block where it is defined.
• Lifetime: Created when the block starts and destroyed as soon as the block finishes.
• Example: In void drive() { int speed = 60; }, speed is a local variable.

𝟐. 𝐌𝐞𝐭𝐡𝐨𝐝𝐬 𝐯𝐬. 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬
Both are reusable blocks of code, but the difference is all about ownership.

𝐌𝐞𝐭𝐡𝐨𝐝 (𝐓𝐡𝐞 𝐎𝐛𝐣𝐞𝐜𝐭’𝐬 𝐁𝐞𝐡𝐚𝐯𝐢𝐨𝐫)
• Association: A function that "belongs" to a class or an object.
• Context: It can access the object’s internal data (fields) using keywords like this or self.
• Invocation: Must be called on an object or class (e.g., myCar.accelerate()).
• Common in: Java, C#, and C++.

𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧 (𝐓𝐡𝐞 𝐒𝐭𝐚𝐧𝐝𝐚𝐥𝐨𝐧𝐞 𝐋𝐨𝐠𝐢𝐜)
• Association: A standalone block of code that exists independently of any class.
• Context: It only knows what you pass into it through arguments.
• Invocation: Called directly by its name (e.g., print()).
• Common in: C, Python, and JavaScript.

𝐊𝐞𝐲 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲: If it’s inside a class, think 𝐅𝐢𝐞𝐥𝐝 and 𝐌𝐞𝐭𝐡𝐨𝐝. If it’s standalone or inside a block, think 𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞 and 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧.

A huge thank you to Syed Zabi Ulla sir, for the incredible mentorship and for providing the technical foundation to navigate these concepts with clarity.

#Programming #CodingTips #Java #CPP #WebDevelopment #ComputerScience #TechLearning #Mentorship #PWIOI
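The post's Car fragments assembled into one small runnable sketch, so the three labels sit side by side: color is a field (object state), speed is a local variable (alive only while drive() runs), and drive() is a method because it belongs to the class and can read the object's fields through this.

```java
class Car {
    String color;            // field: lives as long as the Car object

    Car(String color) {
        this.color = color;  // 'this' distinguishes the field from the parameter
    }

    String drive() {         // method: must be invoked on a Car object
        int speed = 60;      // local variable: destroyed when drive() returns
        return color + " car driving at " + speed;
    }
}

public class CarDemo {
    public static void main(String[] args) {
        System.out.println(new Car("red").drive()); // red car driving at 60
    }
}
```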
The Ghost in the Machine: Why Your Thread-Safe Code Can Be Orders of Magnitude Slower

You probably know that two threads can interfere with each other without ever accessing the same variable. We master locks, semaphores, and concurrency. But there is a hardware concept that most of us ignore on a daily basis: False Sharing.

The Problem: Cache Lines
Processors do not read memory byte by byte. They read in blocks called cache lines, typically 64 bytes on modern processors. If you have two distinct variables (say, two counters A and B) that reside in the same cache line, the hardware faces a problem:
• Core 1 updates variable A.
• Core 2 wants to update variable B.
• Even though they are different variables, the cache coherence protocol (MESI) marks the entire line as invalid for Core 2, forcing a cache reload.

The result? The execution pipelines of both cores stall for hundreds of cycles, with no mutex, no lock, creating a bottleneck where there should be pure parallelism.

Why Does This Matter?
In high-performance systems (trading, search engines, large-scale event processing), False Sharing is the silent killer of scalability. You add more CPU cores, but performance does not grow. Sometimes it regresses.

How to Fix It? Java vs Go
Both languages solve the problem in opposite ways, and that difference says a lot about the philosophy of each.

Java handles it for you. Since Java 8, there is the @Contended annotation (package jdk.internal.vm.annotation). It instructs the JVM to add padding around the field, ensuring it occupies an exclusive cache line. Important detail: to work outside JDK code, you must add the flag -XX:-RestrictContended to the JVM. Without it, the annotation has no effect on user classes.

Go makes it your responsibility. There is no magic annotation. The compiler will not save you. You need to understand the hardware and insert the padding yourself, either manually with a byte array or using cpu.CacheLinePad from the golang.org/x/sys/cpu package, which is more readable and avoids hardcoded numbers.

Side by side:
• Java: @Contended; the JVM manages it; requires -XX:-RestrictContended; not explicit in code; around 128 bytes of overhead per field.
• Go: cpu.CacheLinePad; you manage it; no extra config needed; explicit in code; around 64 bytes of overhead per field.

The Takeaway
Software is not just logic. It is understanding how that logic behaves when it meets the silicon. In Java, the platform abstracts the problem away. In Go, it sits right there in the code, a constant reminder that real concurrency requires thinking beyond the language.

Have you ever debugged a performance problem that made no sense in the code, but made perfect sense in the hardware?

#FalseSharing #CacheLines #ConcurrentProgramming #Java #Golang #HighPerformance #BackendDevelopment #SoftwareEngineering #SystemsProgramming #Programming
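Since @Contended needs an internal package and a JVM flag, a flag-free way to sketch the idea is manual padding, shown below. Caveats, loudly: PaddedCounter is my own hypothetical class; the Java spec does not guarantee field layout order (which is exactly why @Contended exists), so this is a common heuristic rather than a guarantee; and any real speedup must be measured (e.g. with JMH), not assumed.

```java
import java.util.concurrent.CountDownLatch;

public class FalseSharingSketch {
    // Surround the hot field with enough long fields that two counters
    // are unlikely to share a 64-byte cache line. The padding fields
    // are never read; they only occupy space.
    static class PaddedCounter {
        long p1, p2, p3, p4, p5, p6, p7;   // padding before
        volatile long value;
        long q1, q2, q3, q4, q5, q6, q7;   // padding after
    }

    static final PaddedCounter A = new PaddedCounter();
    static final PaddedCounter B = new PaddedCounter();

    public static void main(String[] args) throws InterruptedException {
        final int ITERS = 1_000_000;
        CountDownLatch done = new CountDownLatch(2);
        // Each thread owns its own counter: no lock needed, and with
        // padding, no contended cache line either.
        new Thread(() -> { for (int i = 0; i < ITERS; i++) A.value++; done.countDown(); }).start();
        new Thread(() -> { for (int i = 0; i < ITERS; i++) B.value++; done.countDown(); }).start();
        done.await();
        System.out.println(A.value + " " + B.value); // 1000000 1000000
    }
}
```

The counts are exact because each field has a single writer; the padding only changes where the fields land in memory, not what the program computes.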
Why Go Is Not Optimized for Recursion (and Why That’s Intentional)

I was going through some basics of Go and noticed something interesting: unlike C++, or even Java to some extent, Go is clearly not designed with recursion as a first-class citizen. At first glance, this feels odd. But once you go deeper into how Go handles function calls, stacks, and execution, it actually makes a lot of sense.

Let’s break down the “why”:

1. Every function call = a new stack frame
In Go, every function call creates a new stack frame. That includes:
• parameters
• local variables
• return addresses

In recursion, you're calling the same function again and again, which means stack frames keep piling up and memory usage grows linearly with recursion depth.

So if you write:

func f(n int) int {
    if n == 0 {
        return 0
    }
    return f(n - 1)
}

Execution looks like:

f(5)
 |_ f(4)
     |_ f(3)
         |_ f(2)
             |_ f(1)
                 |_ f(0)

And the stack becomes:

| f(0) |
| f(1) |
| f(2) |
| f(3) |
| f(4) |
| f(5) |

Unlike some languages, Go doesn’t try to “optimize away” these frames.

2. No guaranteed Tail Call Optimization (TCO)
This is the biggest one. In languages that support TCO, something like:

return f(x - 1)

can reuse the current stack frame instead of creating a new one. But in Go, TCO is not guaranteed: the compiler does not eliminate stack frames for tail calls. Why? Because Go prioritizes:
• predictable stack traces
• debuggability
• simplicity of compiler design

3. Goroutine stacks are small… but grow dynamically
This is where things get interesting. Goroutines start with very small stacks (~2KB), and the stack grows dynamically as needed. As recursion deepens:

[ 2KB ] → [ 4KB ] → [ 8KB ] → [ 16KB ] → ...

Sounds great, right? Not entirely. Stack growth involves allocation, copying the existing stack data, and pointer adjustments, which adds overhead. Recursion triggers this growth aggressively, so deep recursion = multiple stack resizes = a performance hit.

So what does this mean practically?
Recursion in Go is fine for shallow problems, but for deep recursion, prefer iteration. Especially in:
• DFS on large graphs
• tree traversals
• anything unbounded
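The same trade-off exists on the JVM, and since this page's other examples are Java, here is the point in Java: the language also gives no guaranteed tail-call optimization, so deep recursion exhausts the thread stack while the iterative version runs in constant stack space. (The exact overflow depth depends on stack size and frame size, so the sketch only asserts that a very deep call overflows.)

```java
public class DeepRecursion {
    static long sumRecursive(long n) {
        if (n == 0) return 0;
        return n + sumRecursive(n - 1);   // one new stack frame per call
    }

    static long sumIterative(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) total += i;   // single fixed-size frame
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumIterative(10_000_000L)); // runs fine
        try {
            sumRecursive(10_000_000L);                 // depth far beyond a default thread stack
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError: no TCO saved us");
        }
    }
}
```

The iterative rewrite (or an explicit stack/worklist for DFS and tree traversals) is the portable fix in both languages.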
🦾 The Power of ForkJoin in Java

When dealing with massive datasets or computationally heavy tasks, sequential processing is often the bottleneck. That’s where the ForkJoin Framework shines, implementing a "Divide and Conquer" strategy at the hardware level.

Here is how it overcomes common parallelism challenges:

1. Efficient Resource Allocation (Work-Stealing)
This is the "secret sauce." In a typical thread pool, if one thread finishes its tasks, it sits idle while others might be overwhelmed. In a ForkJoinPool, idle threads "steal" work from the back of the deques of busy threads. This ensures all CPU cores are consistently utilized.

2. Solving the "Divide and Conquer" Complexity
Managing recursion and thread synchronization manually is error-prone. ForkJoin provides a structured way to:
• Fork: split a large task into smaller, independent sub-tasks.
• Join: wait for the sub-tasks to finish and combine their results.

3. Lightweight Task Management
Unlike standard OS threads, ForkJoin tasks (like RecursiveTask or RecursiveAction) are extremely lightweight. You can run millions of these tasks within a much smaller pool of actual worker threads without the overhead of context switching.

When should you use it?
• Recursive problems: like sorting large arrays (parallel sort) or processing complex tree structures.
• CPU-intensive work: when you have a lot of data and enough cores to handle it in parallel.
• Large collections: when a simple for loop is no longer meeting your SLA.

Pro-tip: For most everyday tasks, Java's parallelStream() uses the common ForkJoinPool under the hood. However, for specialized heavy lifting, creating your own ForkJoinPool gives you much finer control over parallelism levels.

#Java #Multithreading #ParallelComputing #Backend #SoftwareEngineering #Performance #Concurrency
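The fork/join pattern above in a minimal RecursiveTask sketch: summing an array by splitting it in half until chunks are small, computing each chunk directly, then joining the partial results. (THRESHOLD and the task name are my choices for illustration; real thresholds should be tuned.)

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;   // half-open range [lo, hi)

    public ForkJoinSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {       // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ForkJoinSum left = new ForkJoinSum(data, lo, mid);
        ForkJoinSum right = new ForkJoinSum(data, mid, hi);
        left.fork();                      // schedule left half asynchronously
        long rightSum = right.compute();  // compute right half in this thread
        return left.join() + rightSum;    // wait for the left half, combine
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = ForkJoinPool.commonPool()
                               .invoke(new ForkJoinSum(data, 0, data.length));
        System.out.println(sum); // 5000050000
    }
}
```

Computing the right half in the current thread (instead of forking both halves) is the idiomatic trick: it keeps the calling worker busy and leaves the forked half available for work-stealing.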
𝗝𝗗𝗞 𝘃𝘀 𝗝𝗥𝗘 𝘃𝘀 𝗝𝗩𝗠

Here's what actually happens when you run a Java program — and the parts most engineers never learn:

JDK → JRE → JVM → JIT

𝗝𝗗𝗞 (Java Development Kit)
Your complete toolbox. Compiler (javac), debugger, profiler, keytool, jshell — and a bundled JRE. Without it, you can't write or compile Java; you can only run it.

𝗝𝗥𝗘 (Java Runtime Environment)
JVM + standard class libraries. Ships to end users. No compiler. No dev tools. Just enough to run a .jar.

𝗝𝗩𝗠 (Java Virtual Machine)
This is where it gets interesting. Three layers inside:
• Class Loader — loads, links, and initializes .class files at runtime (not all at startup)
• Runtime Data Areas — Heap, Stack, Method Area, PC Register, Native Method Stack
• Execution Engine — interprets + compiles bytecode

𝗝𝗜𝗧 (Just-In-Time Compiler)
Watches your code at runtime. Identifies "hot" methods — those called frequently. Compiles them natively. Skips the interpreter next time. That's how Java catches up to C++ performance on long-running workloads.

𝗪𝗵𝗮𝘁 𝗺𝗼𝘀𝘁 𝗱𝗲𝘃𝘀 𝗺𝗶𝘀𝘀

• 𝗖𝗹𝗮𝘀𝘀 𝗟𝗼𝗮𝗱𝗶𝗻𝗴 𝗶𝘀 𝗹𝗮𝘇𝘆
The JVM doesn't load all classes upfront. It loads them on first use — which is why cold-start time differs from steady-state throughput.

• 𝗝𝗜𝗧 𝗵𝗮𝘀 𝘁𝗶𝗲𝗿𝘀
HotSpot uses tiered compilation: C1 (fast, light optimization) kicks in first, then C2 (aggressive optimization) takes over for truly hot code. GraalVM replaces C2 entirely with a more powerful compiler.

• 𝗧𝗵𝗲 𝗛𝗲𝗮𝗽 𝗶𝘀 𝗻𝗼𝘁 𝗼𝗻𝗲 𝘁𝗵𝗶𝗻𝗴
It's split: Eden → Survivor Spaces → Old Gen, plus Metaspace (post Java 8). Understanding this is a prerequisite to tuning GC and fixing OOM errors.

• 𝗦𝘁𝗮𝗰𝗸 𝘃𝘀 𝗛𝗲𝗮𝗽 — 𝗿𝘂𝗻𝘁𝗶𝗺𝗲 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿
Every thread gets its own stack. Local primitives live there. Objects go to the heap; references live on the stack. This is why stack overflows (deep recursion) and heap OOMs are completely different problems.

• 𝗝𝗩𝗠 𝗶𝘀 𝗻𝗼𝘁 𝗮𝗹𝘄𝗮𝘆𝘀 𝗶𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁𝗲𝗱
When GraalVM's native-image compiles your app ahead of time (AOT), there's no JVM at runtime at all. Instant startup. Fixed memory footprint. Different trade-offs.

• 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗶𝘀 𝗻𝗼𝘁 𝗯𝗶𝗻𝗮𝗿𝘆
It's an intermediate representation — platform-agnostic instructions the JVM can run on any OS. This is Java's "write once, run anywhere" in practice, not just in theory.

𝗧𝗵𝗲 𝗺𝗲𝗻𝘁𝗮𝗹 𝗺𝗼𝗱𝗲𝗹 𝘁𝗵𝗮𝘁 𝗰𝗹𝗶𝗰𝗸𝘀:
• JDK = write + compile + run
• JRE = run only
• JVM = execution environment
• JIT = runtime optimizer

What's the JVM internals detail that surprised you most when you first learned it?

A special thanks to my faculty Syed Zabi Ulla sir at PW Institute of Innovation for their clear explanations and continuous guidance throughout this topic.

#Java #JVM #SoftwareEngineering #BackendDevelopment #ProgrammingFundamentals
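The lazy-initialization point is easy to see in code. In this minimal sketch, the nested class Heavy's static initializer does not run at program startup; it runs only when the class is first actively used:

```java
public class LazyLoading {
    static boolean heavyInitialized = false;

    static class Heavy {
        static {
            // Runs on first active use of Heavy, not when the JVM starts
            // or when the enclosing class is initialized.
            heavyInitialized = true;
        }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        System.out.println("before: " + heavyInitialized); // before: false
        Heavy.answer();                                    // triggers Heavy's <clinit>
        System.out.println("after: " + heavyInitialized);  // after: true
    }
}
```

This same mechanism is what makes the classic initialization-on-demand-holder singleton idiom thread-safe without explicit locking.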
🚀 ArrayDeque — Simplifying Stack and Queue Logic ( https://lnkd.in/g-c6q8v6 )

➡️ ArrayDeque (Array Double-Ended Queue) is a resizable-array class in Java that lets you insert and remove elements from both ends — making it one of the most flexible data structures in the Collections Framework.

🔹 Revolving Door: Just like a revolving door lets people enter and exit from either side, ArrayDeque lets you add or remove elements from both the front and the rear with equal ease.

🔹 Token Queue at a Bank: Imagine a bank where the manager can add urgent customers at the front AND regular customers at the back — that's exactly how ArrayDeque manages its double-ended insertions.

🔹 A Stack of Trays in a Cafeteria: You always pick the top tray and place new ones on top — ArrayDeque replicates this Stack (LIFO) behavior perfectly using push() and pop().

Here are the key takeaways from the ArrayDeque session at TAP Academy by Sharath R sir:

🔹 No Indexing, No for Loop: Unlike ArrayList, ArrayDeque has zero indexing support. You cannot use a traditional for loop or get(i) — you must use for-each, an Iterator, or descendingIterator() instead.

🔹 Null is Strictly Forbidden: ArrayDeque throws a NullPointerException the moment you try to insert null — a critical difference from ArrayList and LinkedList that interviewers love to test.

🔹 Smarter Resizing: When the default capacity of 16 fills up, ArrayDeque doubles itself (n × 2). ArrayList uses (n × 3/2) + 1 — two different formulas worth remembering cold.

🔹 Reverse Traversal via descendingIterator(): Since ListIterator is unavailable (ArrayDeque implements Deque, not List), the only way to traverse backward is using descendingIterator() — which starts at the last element and moves toward the front.

🔹 One Class, Three Roles: ArrayDeque can act as a Stack (push/pop), a Queue (offer/poll), or a full Deque (addFirst/addLast) — making it the most versatile tool in Java Collections.

Visit this interactive webpage to understand the concept by visualization: https://lnkd.in/g-c6q8v6

#Java #JavaDeveloper #Collections #ArrayDeque #DataStructures #TAPAcademy #CodingJourney #PlacementPrep #SoftwareEngineering #InterviewPrep
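The "one class, three roles" takeaway, plus the null rule and reverse traversal, in one runnable sketch:

```java
import java.util.ArrayDeque;
import java.util.Iterator;

public class ArrayDequeDemo {
    public static void main(String[] args) {
        // Role 1 — Stack (LIFO): push/pop operate on the front.
        ArrayDeque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop());        // b

        // Role 2 — Queue (FIFO): offer at the tail, poll from the head.
        ArrayDeque<String> queue = new ArrayDeque<>();
        queue.offer("first");
        queue.offer("second");
        System.out.println(queue.poll());       // first

        // Role 3 — Deque: both ends, like the bank-manager example.
        ArrayDeque<String> deque = new ArrayDeque<>();
        deque.addLast("regular");
        deque.addFirst("urgent");
        System.out.println(deque.peekFirst());  // urgent

        // No indexing: reverse traversal goes through descendingIterator().
        Iterator<String> it = deque.descendingIterator();
        while (it.hasNext()) System.out.print(it.next() + " "); // regular urgent
        System.out.println();

        // Nulls are rejected immediately.
        try {
            deque.add(null);
        } catch (NullPointerException e) {
            System.out.println("null rejected"); // null rejected
        }
    }
}
```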
🔹 Sorting isn’t always about values — sometimes it’s about bit patterns.

A powerful yet underrated technique: 👉 sort numbers based on the number of 1’s (set bits) in their binary representation.

This pattern appears in real-world problem solving like:
• Bitmasking problems
• Subset generation
• Optimization problems where “active bits” matter

👉 Example:
3 → 11 → 2 set bits
5 → 101 → 2 set bits
8 → 1000 → 1 set bit

✅ Sorted (by set bits, then value): [8, 3, 5]

💻 Java Implementation (note: Arrays.sort with a Comparator needs a boxed Integer[], not a primitive int[]):

Integer[] arr = {3, 5, 8};
Arrays.sort(arr, (a, b) -> {
    int countA = Integer.bitCount(a);
    int countB = Integer.bitCount(b);
    if (countA != countB) {
        return Integer.compare(countA, countB);
    }
    return Integer.compare(a, b);
});

(Integer.compare instead of a - b also avoids overflow when values can be negative.)

⚡ Time Complexity Insight:
• Sorting → O(n log n)
• bitCount() → O(1) (hardware optimized)
• Overall complexity remains O(n log n)

🧠 Optimization Trick (Interview Ready): avoid recomputing bit counts for large arrays:

Map<Integer, Integer> map = new HashMap<>();
for (int num : arr) {
    map.put(num, Integer.bitCount(num));
}
Arrays.sort(arr, (a, b) -> {
    if (!map.get(a).equals(map.get(b))) {
        return Integer.compare(map.get(a), map.get(b));
    }
    return Integer.compare(a, b);
});

🧠 Deeper Insight: this follows the classic pattern 👉 Decorate → Sort → Undecorate (Schwartzian Transform):
• Attach metadata (bit count)
• Sort using metadata
• Retrieve original values

⚠️ Edge Cases to Think About:
• Negative numbers (2’s complement representation)
• Duplicate values
• Sorting stability

🧠 Alternative Bit Trick:
n = n & (n - 1); // removes the lowest set bit — useful when bitCount() is unavailable

📌 Key Takeaway: good engineers don’t just sort data — they define what “sorted” really means based on the problem.

💬 Have you used custom comparators like this in real-world systems or interviews?

#bitmanipulation #java #dsa #problemSolving #backenddeveloper #codinginterview #softwareengineering
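The comparator as a complete runnable program, reproducing the post's [8, 3, 5] example:

```java
import java.util.Arrays;

public class SortBySetBits {
    public static void main(String[] args) {
        // Boxed Integer[] on purpose: Arrays.sort with a Comparator
        // does not accept a primitive int[].
        Integer[] arr = {3, 5, 8};
        Arrays.sort(arr, (a, b) -> {
            int countA = Integer.bitCount(a);   // 3 → 2 bits, 5 → 2 bits, 8 → 1 bit
            int countB = Integer.bitCount(b);
            if (countA != countB) return Integer.compare(countA, countB);
            return Integer.compare(a, b);       // overflow-safe tie-break by value
        });
        System.out.println(Arrays.toString(arr)); // [8, 3, 5]
    }
}
```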
🚀 Day 2 — First & Last Index of Repeating Characters

✅ Problem
Find the first and last occurrence index of each repeating character in a string.

import java.util.*;
import java.util.stream.*;

class FirstLastIndexFinder {
    public static void main(String[] args) {
        String input = "Programming";

        Map<Character, List<Integer>> map = IntStream.range(0, input.length())
            .boxed()
            .collect(Collectors.groupingBy(
                i -> input.charAt(i),
                LinkedHashMap::new,
                Collectors.toList()
            ));

        map.entrySet().stream()
            .filter(entry -> entry.getValue().size() > 1)
            .forEach(entry -> {
                List<Integer> indexes = entry.getValue();
                System.out.println(
                    "Key: " + entry.getKey()
                    + ", First Index: " + indexes.get(0)
                    + ", Last Index: " + indexes.get(indexes.size() - 1)
                );
            });
    }
}

💡 Explanation
👉 Step-by-step:
• Use IntStream.range() to iterate over indexes
• Group indexes by character using groupingBy()
• Use LinkedHashMap to maintain insertion order
• Filter characters that appear more than once
• First index → indexes.get(0); last index → indexes.get(size - 1)

✅ Output (for "Programming": P=0, r=1, o=2, g=3, r=4, a=5, m=6, m=7, i=8, n=9, g=10)
Key: r, First Index: 1, Last Index: 4
Key: g, First Index: 3, Last Index: 10
Key: m, First Index: 6, Last Index: 7

Happy Learning!!!