𝙃𝙤𝙬 𝙅𝙑𝙈 𝙈𝙚𝙢𝙤𝙧𝙮 𝙒𝙤𝙧𝙠𝙨 (𝙃𝙚𝙖𝙥, 𝙎𝙩𝙖𝙘𝙠, 𝙈𝙚𝙩𝙖𝙎𝙥𝙖𝙘𝙚) — 𝙎𝙞𝙢𝙥𝙡𝙚 𝙀𝙭𝙥𝙡𝙖𝙣𝙖𝙩𝙞𝙤𝙣

Understanding JVM memory is a must for every Java backend developer. Almost every interviewer asks about it in some form. Here’s the simplest explanation 👇

1️⃣ The JVM divides memory into two main parts: Heap and Stack

✅ Stack Memory
Used for:
method calls
local variables
method arguments
Fast, and automatically cleaned up when a method finishes.
Example: int x = 10; // stored on the stack
Each thread has its own stack.

---

2️⃣ Heap Memory (the big area)
Used for storing objects:
User u = new User(); // stored on the heap
The heap is shared by all threads, and the Garbage Collector cleans it up.
The heap is divided into:

2.1 Young Generation (new objects)
Eden → new objects are created here
Survivor S0/S1 → objects that survive a few GC cycles
Frequent “minor GC” happens here.

2.2 Old Generation (long-lived objects)
If an object stays alive long enough, it is promoted here.
GC here is called “major GC”.

---

3️⃣ MetaSpace (stores class information)
This is where the JVM stores:
class definitions
method metadata
It replaced “PermGen” in Java 8 and grows automatically as needed.

---

4️⃣ Simple Flow
1. Program starts
2. Methods → stack
3. Objects → heap
4. Class info → MetaSpace
5. Garbage Collector removes unused heap objects

💡 In short
Stack → method-level data, very fast
Heap → objects, cleaned by GC
Young Gen → new objects
Old Gen → long-lived objects
MetaSpace → class and method metadata

Once you understand this, debugging memory leaks, GC issues, and OutOfMemoryError becomes much easier.

#Java #JVM #BackendDevelopment #InterviewPrep #JavaInternals #CleanCode #CodingConcepts #techieanky #javainterview #prepration #concept
JVM Memory Basics: Heap, Stack, MetaSpace
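To make the layout concrete, here is a minimal runnable sketch (class and variable names are illustrative, not from the post) with comments marking where each piece lives:

public class MemoryDemo {

    public static void main(String[] args) {
        int x = 10;               // primitive local variable → main's stack frame
        User u = new User("Anu"); // the reference 'u' → stack; the User object → heap (Eden first)
        greet(u, x);
    }                             // frame popped here; the User object becomes GC-eligible

    static void greet(User user, int times) { // parameters live in greet's own stack frame
        for (int i = 0; i < times; i++) {
            String msg = "Hi, " + user.name;  // short-lived objects land in Eden; minor GC reclaims them
        }
    }
}

class User {                      // the class definition itself (metadata) lives in MetaSpace once loaded
    final String name;
    User(String name) { this.name = name; }
}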
---
🧠 Array Memory Model & Random Access Mechanism

1️⃣ An array is an object and is always stored in heap memory.

2️⃣ 🔑 Contiguous Memory Allocation
The JVM looks for a continuous block of memory.
Size required = (size of data type × number of elements)
Example: int[] arr = new int[5];
int = 4 bytes
Total = 5 × 4 = 20 bytes
The JVM allocates 20 contiguous bytes on the heap.

3️⃣ Address vs Index vs Value (VERY IMPORTANT)
These are three different things 👇
Address → the actual memory location (hidden from the Java developer)
Index → the position number (0, 1, 2, …)
Value → the data stored at that position
📌 You never see addresses in Java
📌 You only work with indexes

4️⃣ How does the JVM access arr[i] so fast?
🔑 Direct Address Calculation (Random Access)
Internally the JVM does something like:
address = baseAddress + (index × sizeOfDataType)
Example: arr[3]
The JVM knows the base address of arr, knows int = 4 bytes, and calculates base + (3 × 4).
👉 No loop, no search, no traversal
👉 A direct jump to the memory location
📌 This is why arrays are very fast for reading

5️⃣ Why are arrays fast for READ but slow for INSERT/DELETE?
✅ Fast read (and fast write by index): random access lets the JVM jump straight to the address.
❌ Slow insert/delete: the size is fixed, so inserting in the middle means shifting elements, and the memory block cannot be resized.
📌 This is why arrays are rigid

6️⃣ Can we insert into an array?
❌ No. Arrays are fixed size.
Once created: new int[5]; → size = 5 forever. The JVM cannot resize that memory block.
👉 Any “insertion” really means: create a new array, copy the old values, add the new value.
📌 This is expensive; see the code sketch after this post.

🧠 Java Memory — Objects & References (Quick Clarity)
👉 Objects (including arrays) are stored in the Heap 🧱
👉 Reference variables are stored on the Stack 📍
👉 The reference variable stores the base address of the object:
✔️ not the value
✔️ not index 0
✔️ but the starting address of the array object itself
👉 Inside that block, each element sits at its own address and holds the actual value 📦

🔖 Frontlines EduTech (FLM)

#Java #JVM #HeapMemory #StackMemory #ProgrammingConcepts #JavaInternals
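Point 6️⃣ is easiest to feel in code. A minimal runnable sketch of what an “insertion” actually costs (class and method names are illustrative):

import java.util.Arrays;

public class ArrayInsertDemo {

    // Arrays are fixed-size, so "inserting" means allocating a new array
    // and copying every element: O(n) work, exactly as described above.
    static int[] insertAt(int[] src, int index, int value) {
        int[] dst = new int[src.length + 1];
        System.arraycopy(src, 0, dst, 0, index);                           // copy the left part
        dst[index] = value;                                                // place the new element
        System.arraycopy(src, index, dst, index + 1, src.length - index); // shift the right part over
        return dst;
    }

    public static void main(String[] args) {
        int[] arr = {10, 20, 40, 50};
        System.out.println(Arrays.toString(insertAt(arr, 2, 30))); // [10, 20, 30, 40, 50]
    }
}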
---
☕ 𝗛𝗼𝘄 𝗝𝗮𝘃𝗮 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝘀 𝗦𝗼𝘂𝗿𝗰𝗲 𝗖𝗼𝗱𝗲 𝘁𝗼 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲

Ever wondered what happens after you write Java code? Let’s break it down step by step 👇

🧑💻 𝗔𝘁 𝗖𝗼𝗺𝗽𝗶𝗹𝗲 𝗧𝗶𝗺𝗲

1️⃣ 𝗪𝗿𝗶𝘁𝗲 𝗖𝗼𝗱𝗲
You write Java code in a .java file using classes, methods, and objects.

2️⃣ 𝗟𝗲𝘅𝗶𝗰𝗮𝗹 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀
The compiler (javac) scans the source and converts it into tokens ➡️ keywords, identifiers, literals, symbols.

3️⃣ 𝗦𝘆𝗻𝘁𝗮𝘅 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀
Checks that the code follows Java grammar rules and builds a parse tree 🌳

4️⃣ 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀
Validates data types, variable declarations, and rule correctness ➡️ catches type mismatches and invalid references.

5️⃣ 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻
Generates platform-independent bytecode stored in a .class file.

6️⃣ 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻
Applies basic optimizations (javac does only light ones, such as constant folding; the heavy lifting happens later in the JVM’s JIT) ⚡

⚙️ 𝗔𝘁 𝗥𝘂𝗻𝘁𝗶𝗺𝗲 (𝗜𝗻𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝗝𝗩𝗠)

7️⃣ 𝗖𝗹𝗮𝘀𝘀 𝗟𝗼𝗮𝗱𝗲𝗿
Loads .class files into memory.

8️⃣ 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗩𝗲𝗿𝗶𝗳𝗶𝗲𝗿
Ensures safety and prevents illegal or malicious operations 🔒

9️⃣ 𝗜𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁𝗲𝗿 / 𝗝𝗜𝗧 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝗿
Converts bytecode into native machine code ➡️ the JIT boosts performance by compiling hot code paths 🚀

✅ 𝗧𝗵𝗲 𝗥𝗲𝘀𝘂𝗹𝘁
✔️ Platform independence
✔️ Secure execution
✔️ Automatic memory management
✔️ Runtime performance optimization

𝗪𝗿𝗶𝘁𝗲 𝗼𝗻𝗰𝗲, 𝗿𝘂𝗻 𝗮𝗻𝘆𝘄𝗵𝗲𝗿𝗲 isn’t magic — it’s the JVM at work ☕💡

Which part of the Java compilation process did you first learn about? 👇

#Java #JVM #Bytecode #JavaInternals #SoftwareEngineering #BackendDevelopment
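You can watch the compile step yourself with two standard JDK tools (the file name is illustrative):

// Hello.java: compile with `javac Hello.java` to produce Hello.class (bytecode),
// then run `javap -c Hello` to see the actual instructions the JVM will execute.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, bytecode!");
    }
}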
---
Most backend bugs don’t come from bad logic. They come from not understanding how Java actually runs your code. Here’s a simple one many developers miss: Heap and Stack are not the same thing.

> Heap memory
+ Shared across all threads
+ Stores objects
+ The Garbage Collector works here
+ Slower but flexible

> Stack memory
+ One stack per thread
+ Stores method calls and local variables
+ No garbage collection
+ Extremely fast

Now the real mistake I see: developers hit an OutOfMemoryError and immediately blame the heap. Sometimes the heap is perfectly fine.

> The issue is often:
Deep or unbounded recursion
Too many nested method calls
Oversized stack frames from huge numbers of locals
That leads to StackOverflowError, not heap exhaustion. (Large objects created inside loops per request are the opposite case: those really do exhaust the heap.)

> In real backend systems this matters because:
Each incoming request runs on a thread
Each thread has a fixed stack size
One bad code path can crash threads under load

A minimal demo of this failure mode follows this post. If you want to write reliable backend systems, don’t just learn Java syntax. Learn how Java executes your code.

Have you ever debugged a StackOverflowError in production?

#Java #JVM #BackendDevelopment #JavaInternals #SoftwareEngineering #SystemThinking
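If you want to see the stack side fail on purpose, here is a minimal sketch (names are illustrative; -Xss is the standard HotSpot flag for per-thread stack size):

public class StackDepthDemo {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no base case: every call adds one frame to this thread's fixed-size stack
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The heap is untouched here; only the stack is exhausted.
            // Try a smaller stack, e.g. `java -Xss256k StackDepthDemo`,
            // and the error arrives after far fewer calls.
            System.out.println("StackOverflowError after " + depth + " calls");
        }
    }
}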
---
🚨 𝗪𝗵𝘆 𝗼𝘃𝗲𝗿𝗿𝗶𝗱𝗶𝗻𝗴 𝗲𝗾𝘂𝗮𝗹𝘀() 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗵𝗮𝘀𝗵𝗖𝗼𝗱𝗲() 𝗯𝗿𝗲𝗮𝗸𝘀 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻

This is one of those Java rules that feels academic… until it breaks your system at scale.

𝗧𝗵𝗲 𝗿𝘂𝗹𝗲 𝗶𝘀 𝘀𝗶𝗺𝗽𝗹𝗲:
𝗜𝗳 𝘁𝘄𝗼 𝗼𝗯𝗷𝗲𝗰𝘁𝘀 𝗮𝗿𝗲 𝗲𝗾𝘂𝗮𝗹 𝗮𝗰𝗰𝗼𝗿𝗱𝗶𝗻𝗴 𝘁𝗼 𝗲𝗾𝘂𝗮𝗹𝘀(), 𝘁𝗵𝗲𝘆 𝗠𝗨𝗦𝗧 𝗿𝗲𝘁𝘂𝗿𝗻 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗵𝗮𝘀𝗵𝗖𝗼𝗱𝗲().
Ignoring this doesn’t fail compilation. It fails 𝘀𝗶𝗹𝗲𝗻𝘁𝗹𝘆 — and that’s what makes it dangerous.

💥 𝗪𝗵𝗮𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗯𝗿𝗲𝗮𝗸𝘀 𝗶𝗻 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻?

🔹 𝗛𝗮𝘀𝗵-𝗯𝗮𝘀𝗲𝗱 𝗰𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻𝘀 𝗺𝗶𝘀𝗯𝗲𝗵𝗮𝘃𝗲
Classes like:
• HashMap
• HashSet
• ConcurrentHashMap
𝗿𝗲𝗹𝘆 𝗼𝗻 𝗯𝗼𝘁𝗵 hashCode() and equals().
If you override only equals():
• Duplicate objects appear in a HashSet
• HashMap.get() suddenly returns null
• Cache lookups fail randomly
• Memory usage grows unexpectedly

🔹 𝗘𝘅𝗮𝗺𝗽𝗹𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗵𝗶𝗱𝗱𝗲𝗻 𝗯𝘂𝗴
You insert an object into a HashSet. Later, you check if it exists — and it doesn’t. Why?
• equals() says they are equal
• hashCode() sends them to 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁 𝗯𝘂𝗰𝗸𝗲𝘁𝘀
Result:
👉 The object exists… but can’t be found.

🔹 𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗶𝘀 𝘄𝗼𝗿𝘀𝗲 𝗶𝗻 𝗿𝗲𝗮𝗹 𝘀𝘆𝘀𝘁𝗲𝗺𝘀
• Caching layers break
• Deduplication fails
• Security checks behave inconsistently
• Bugs appear 𝗼𝗻𝗹𝘆 𝘂𝗻𝗱𝗲𝗿 𝗹𝗼𝗮𝗱
• Debugging becomes a nightmare
These issues are 𝗻𝗼𝗻-𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗶𝘀𝘁𝗶𝗰 — the worst kind.

✅ 𝗧𝗵𝗲 𝗰𝗼𝗿𝗿𝗲𝗰𝘁 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵
Whenever you override equals():
✔ Always override hashCode()
✔ Use the same fields in both
✔ Keep those fields immutable if possible
✔ Use your IDE’s generator or Lombok (@EqualsAndHashCode) carefully
(A minimal hand-written example follows this post.)

🎯 𝗕𝗮𝗰𝗸𝗲𝗻𝗱 𝗿𝗲𝗮𝗹𝗶𝘁𝘆 𝗰𝗵𝗲𝗰𝗸
Most production bugs aren’t caused by complex logic. They’re caused by 𝗯𝗿𝗼𝗸𝗲𝗻 𝗰𝗼𝗻𝘁𝗿𝗮𝗰𝘁𝘀. And equals() / hashCode() is one of the most important contracts in Java.

#Java #BackendDevelopment #SoftwareEngineering #JavaInternals #CleanCode #SystemDesign #InterviewPrep #SpringBoot #ProductionBugs
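Here is a minimal hand-written sketch of the contract done right (the class and its fields are illustrative):

import java.util.Objects;

final class UserKey {
    private final String tenant;
    private final long id;

    UserKey(String tenant, long id) {
        this.tenant = tenant;
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof UserKey)) return false;
        UserKey other = (UserKey) o;
        return id == other.id && Objects.equals(tenant, other.tenant);
    }

    @Override
    public int hashCode() {
        // Same fields as equals(), so equal objects always land in the same bucket.
        return Objects.hash(tenant, id);
    }
}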
---
Same Time Complexity. Different Runtime. Here’s Why.

Today I had a small but eye-opening realization. I was solving a problem where two Java solutions had the same time complexity, O(n²), yet one consistently ran faster than the other on LeetCode.

At first glance this feels confusing: if Big-O is the same, shouldn’t performance be the same too? Turns out… not at all.

Consider these two solutions.

Faster version:

for (int[] rows : matrix) {
    for (int value : rows) {
        if (value < 0) {
            negativeCount++;
            value = -value; // negate the local copy so we sum absolute values
        }
        sum += value;
        if (value < leastElement) {
            leastElement = value;
        }
    }
}

Slower version:

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        if (matrix[i][j] < 0) countNeg++;
        minimum = Math.min(minimum, Math.abs(matrix[i][j]));
        res += Math.abs(matrix[i][j]);
    }
}

What makes the first one faster? Even though both are O(n²), the constant factors matter:
The slower version reads matrix[i][j] three times per element and calls Math.abs() twice and Math.min() once in the inner loop.
The faster version reads each value exactly once, uses simple condition checks and assignments, and minimizes the work inside the inner loop.
(One caveat: a warmed-up JIT usually inlines Math.abs() and Math.min(), so the redundant array reads and online-judge timing noise likely explain more of the gap than call overhead itself. For trustworthy numbers, use a proper benchmark harness such as JMH rather than a single submission timing.)

The takeaway: Big-O tells you how an algorithm scales, but real performance depends on:
Instruction count
Branching
Method calls
Cache friendliness
How much work you do per iteration

This is where problem-solving starts turning into engineering. Writing a correct solution is step one. Writing an efficient one is step two. Understanding why it’s efficient is where growth really happens.

Still learning. Still optimizing.

#DSA #Java #ProblemSolving #PerformanceOptimization #CleanCode #LeetCode #SoftwareEngineering #LearningInPublic
---
🔷 Problem: 1351. Count Negative Numbers in a Sorted Matrix
🚀 Day 84 of #100DaysOfLeetCode

📘 Problem Summary:
You are given a matrix grid of size m x n where:
Each row is sorted in non-increasing order
Each column is sorted in non-increasing order
Your task is to count the number of negative numbers in the matrix.

Example:
grid = [
  [ 4,  3,  2, -1],
  [ 3,  2,  1, -1],
  [ 1,  1, -1, -2],
  [-1, -1, -2, -3]
]
Output: 8

🧠 Intuition:
Brute force would check every element → O(m × n) ❌
But the matrix is sorted both row-wise and column-wise, which gives us a big optimization opportunity.
Key insight: if grid[i][j] is negative, then everything to the right of it in that row is also negative.
This allows us to apply binary search on each row.

🧩 Optimized Strategy (Binary Search per Row):
For each row:
Find the index of the first negative number
Count that element and everything after it (row length minus that index)
This reduces the complexity significantly; see the code sketch after this post.

⏱️ Performance:
⚡ Runtime: 1 ms
📊 Beats: 100% of Java solutions
🧠 Memory: ~41 MB

📈 Complexity:
Time Complexity: O(m log n)
Space Complexity: O(1)

🌟 Key Takeaway:
This problem highlights a core DSA principle: sorted data should never be brute-forced.
Whenever you see sorting properties:
Think Binary Search
Think Two Pointers
Think pruning unnecessary work
These small optimizations separate average solutions from great ones.

Link: [https://lnkd.in/g8JbjY2a]

🏷️ Hashtags:
#100DaysOfLeetCode #Day84 #Problem1351 #CountNegativeNumbers #BinarySearch #MatrixProblems #DSA #Java #LeetCode #ProblemSolving #CodingChallenge #InterviewPreparation #Algorithms #DataStructures #DeveloperJourney #ArjunInfoSolution #TechCareers #CareerGrowth #CodingJourney #JavaDeveloper
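A minimal sketch of the per-row binary search described above (the method name matches LeetCode’s required countNegatives signature):

class Solution {
    public int countNegatives(int[][] grid) {
        int count = 0;
        for (int[] row : grid) {
            count += row.length - firstNegativeIndex(row);
        }
        return count;
    }

    // Rows are sorted non-increasing, so the negatives form a suffix.
    // Binary-search for the first index holding a negative value.
    private int firstNegativeIndex(int[] row) {
        int lo = 0, hi = row.length; // search range [lo, hi)
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (row[mid] < 0) hi = mid;  // negative: first negative is at mid or earlier
            else lo = mid + 1;           // non-negative: first negative is to the right
        }
        return lo; // equals row.length when the row has no negatives
    }
}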
---
JVM - The Invisible Engine Behind Java’s Power

1️⃣ Class Loading - Bringing Code to Life
- The JVM doesn’t load all classes at once.
- It loads them on demand using the Class Loader Subsystem:
Loading → finds .class files
Linking → verifies bytecode, allocates memory, resolves references
Initialization → executes static blocks and initializes static fields
- This lazy loading approach is what makes Java modular and memory efficient.

2️⃣ JVM Runtime Data Areas - Where Execution Really Happens
- Once the code is loaded, the JVM organizes memory into structured regions:
Method Area - class metadata, static variables, bytecode
Heap - objects + arrays (GC-managed)
JVM Stack - each thread gets its own stack frames
PC Register - tracks the current instruction
Native Method Stack - handles JNI calls
- Understanding these areas is key for debugging performance issues and memory leaks.

3️⃣ Execution Engine - Turning Bytecode into Performance
- The Execution Engine is the heartbeat of the JVM:
Interpreter → executes bytecode line by line
JIT Compiler → converts hot code paths into machine code for speed
Garbage Collector → frees unused objects automatically
- This hybrid execution model is why Java apps can scale without compromising speed.

4️⃣ JVM = Portability + Security + Performance
- Write Once, Run Anywhere
- Automatic memory management
- Dynamic class loading
- Optimized runtime execution

From startups to globally distributed systems, the JVM remains one of the most battle-tested runtime environments in modern backend engineering.

Reference - https://lnkd.in/gNhUhBQg

#Java #JVM #JavaDeveloper #BackendEngineering #SoftwareArchitecture #Programming #SystemDesign #Microservices #TechCommunity
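Point 1️⃣ is easy to observe yourself. A tiny sketch (class names are illustrative; -verbose:class is a standard HotSpot flag that logs class loading):

public class LazyLoadDemo {
    public static void main(String[] args) {
        System.out.println("before first use");
        Helper.hello(); // Helper is initialized here, on first use, not at startup
    }
}

class Helper {
    static { System.out.println("Helper initialized (static block ran)"); }
    static void hello() { System.out.println("hello from Helper"); }
}

// Run with: java -verbose:class LazyLoadDemo
// and watch Helper appear in the load log only when hello() is first called.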
---
Problem 2: Binary Tree Paths
Level: Easy–Medium
Platform: LeetCode 257

Return all root-to-leaf paths as strings like:
"1->2->5"
"1->3"

🔹 Java Code

import java.util.*;

// LeetCode supplies this TreeNode definition; it is included here so the code is self-contained.
class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;
    TreeNode() {}
    TreeNode(int val) { this.val = val; }
    TreeNode(int val, TreeNode left, TreeNode right) {
        this.val = val;
        this.left = left;
        this.right = right;
    }
}

class Solution {
    public List<String> binaryTreePaths(TreeNode root) {
        List<String> result = new ArrayList<>();
        if (root == null) return result;
        dfs(root, "", result);
        return result;
    }

    private void dfs(TreeNode node, String path, List<String> result) {
        if (node == null) return;

        // add the current node to the path
        path = path + node.val;

        // leaf -> store the completed path
        if (node.left == null && node.right == null) {
            result.add(path);
            return;
        }

        // continue the path with "->"
        path = path + "->";
        dfs(node.left, path, result);
        dfs(node.right, path, result);
    }
}

🔹 Explanation
Build the path as a string while traversing.
When a leaf is reached → store the full path.
No backtracking is needed because each recursive call works on its own new string.
Time: O(n × path length)
Space: O(h) recursion depth

#DSA #DataStructuresAndAlgorithms #Coding #Programmer #LeetCode #CodeEveryday #JavaDSA #CodingPractice #ProblemSolving #CP #CompetitiveProgramming #DailyCoding #TechJourney #CodingCommunity #DeveloperLife #100DaysOfCode #CodeWithMe #LearnToCode #GeekForGeeks #CodingMotivation
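A quick usage check (hypothetical harness, reusing the TreeNode class above) for the tree [1,2,3,null,5]:

public class Demo {
    public static void main(String[] args) {
        TreeNode root = new TreeNode(1,
                new TreeNode(2, null, new TreeNode(5)),
                new TreeNode(3));
        System.out.println(new Solution().binaryTreePaths(root)); // prints [1->2->5, 1->3]
    }
}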
---
Problem: Given an integer array nums sorted in non-decreasing order, remove the duplicates in-place such that each unique element appears only once. The relative order of the elements should be kept the same. Consider the number of unique elements in nums to be k. After removing duplicates, return the number of unique elements k. The first k elements of nums should contain the unique numbers in sorted order. The remaining elements beyond index k - 1 can be ignored.

Idea: Since the array is sorted in non-decreasing order and the relative order must be preserved, two pointers are enough. A slow pointer (first) marks the end of the unique prefix, and a fast pointer (second) scans left to right. Whenever nums[second] differs from nums[first], advance first and copy nums[second] into that slot. The number of unique elements is first + 1.

class Main {
    public static void main(String[] args) {
        int[] nums = {0, 0, 1, 2, 3, 3, 3, 3, 5, 6};
        int uniqueCount = removeDuplicates(nums);
        System.out.println("Total unique elements after removing duplicates: " + uniqueCount);
    }

    static int removeDuplicates(int[] nums) {
        if (nums.length == 0) return 0; // guard for an empty array
        int first = 0; // end of the unique prefix
        for (int second = 1; second < nums.length; second++) {
            if (nums[first] != nums[second]) {
                first++;
                nums[first] = nums[second];
            }
        }
        return first + 1;
    }
}

#java8 #corejava #programming #practice #java #code #leetcodeproblem #problemsolving