Java Method Overloading

I was revising notes on method overloading, and it reminded me how easy it is to memorize definitions… but miss the real mechanics behind them. Let's break it down in a way that actually sticks.

What the Compiler Actually Uses
When Java resolves an overloaded method, it ONLY looks at:
✔️ Method name
✔️ Number of parameters
✔️ Data types of parameters
✔️ Order (sequence) of parameters
This combination is called the method signature.
❌ Return type is completely ignored.

What is "Overload Resolution"?
It's the process where the compiler decides which method to call from multiple overloaded methods.
Important: this decision happens at compile time, not runtime.
That's why method overloading is also called: compile-time polymorphism, static polymorphism, early binding, static binding.

Real Understanding (From Notes → Reality)
"Compiler binds method call with method body during compilation."
Let's make that practical:

void add(int x, int y) { }
void add(int x, float y) { }
void add(float x, float y) { }

add(10.5f, 20.5f);

👉 The compiler instantly picks add(float, float).
✔️ Decision made at compile time
✔️ Execution happens later at runtime

⚡ Where Most People Go Wrong
Many think: "Return type helps differentiate methods." ❌ Wrong.

int add(int a, int b) { return 0; }
double add(int a, int b) { return 0; } // ❌ Error

👉 Same signature → compilation error.

The Hidden Rule
When multiple methods match, Java follows this priority:
1️⃣ Exact match
2️⃣ Widening
3️⃣ Autoboxing
4️⃣ Varargs
If two methods are equally specific at the same level → ❌ ambiguity → compilation error.

The Illusion
"It creates an illusion that one method performs multiple activities."
In reality: the methods are different, only the name is the same, and each method handles a specific case. Overloading improves readability; it isn't magic.

Reference
For a deeper understanding of invalid cases: 🔗 https://lnkd.in/gD3W_efG

Thanks to PW Institute of Innovation and my mentor Syed Zabi Ulla sir for helping me truly understand how Java thinks under the hood.
Your guidance made these concepts much clearer and interview-ready.

🚨 One-Line Truth
Method overloading is not about flexibility at runtime — it's about clarity and compile-time precision.

#Java #Programming #SoftwareEngineering #CodingInterview #FAANG #JavaDeveloper #TechLearning
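The priority list above can be checked directly. A minimal sketch (class and method names are mine, purely illustrative): with no exact pick(int) overload, an int argument falls through the priority list, so widening beats autoboxing, and varargs is the last resort.

```java
// Illustrative only: no pick(int) overload exists, so an int argument
// cannot find an exact match and resolution falls down the priority list.
public class OverloadDemo {
    static String pick(long x)    { return "widening: int -> long"; }   // priority 2
    static String pick(Integer x) { return "autoboxing: Integer"; }     // priority 3
    static String pick(int... xs) { return "varargs"; }                 // priority 4

    public static void main(String[] args) {
        System.out.println(pick(10)); // widening wins over autoboxing
        System.out.println(pick());   // only the varargs overload matches a no-arg call
    }
}
```

Comment out pick(long) and the same call re-resolves to the Integer overload at the next compile; that re-resolution is exactly the compile-time decision the post describes.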
Java Method Overloading: Compiler Resolution Rules
-
Java: Automatic Type Promotion of Primitives

I am trying to explore and explain the concept of automatic type promotion of primitives using a simple piece of Java code with two byte values:

class TestAutomatictypepromotion {
    public static void main(String[] ar) {
        byte a = 10;
        byte b = 20;
        int sum = a + b;
        System.out.println(sum);
    }
}

Let me prove the byte → int promotion step by step through actual bytecode analysis. We have the real bytecode; now let me build the full visual explanation.

1. javac TestAutomatictypepromotion.java (compile the source code)
2. javap -c TestAutomatictypepromotion.class (disassemble to see the bytecode)

The 3 smoking-gun proofs from the actual bytecode. Here is the raw javap -c output from this exact code, with the proof highlighted:

0: bipush 10   ← pushes 10 as INT (not byte)
2: istore_1    ← "i" = integer store (no bstore exists!)
3: bipush 20   ← pushes 20 as INT
5: istore_2    ← integer store
6: iload_1     ← "i" = integer load (no bload exists!)
7: iload_2     ← integer load
8: iadd        ← "i" = INTEGER add ← THE KEY PROOF
9: istore_3

Proof 1 — iload, not bload: When a and b are loaded from local variable slots, the opcodes are iload_1 and iload_2. The i prefix means integer. There is literally no bload instruction in the entire JVM specification.

Proof 2 — iadd, not badd: The addition uses iadd. There is no badd opcode. The JVM arithmetic instruction set only has iadd, ladd, fadd, dadd (int, long, float, double). Bytes have no dedicated add — they must become ints first.

Proof 3 — println:(I)V: The method descriptor in constant pool entry #13 is println:(I)V. The I is the JVM type descriptor for int. So even println receives an int, not a byte.

Why does Java do this? The JVM's operand stack and local variable slots work natively in 32-bit units. byte, short, and char values are widened to int the moment they enter the stack (and even boolean is represented as an int at the JVM level). This widening is called numeric promotion, defined in JLS §5.6.1 (Unary Numeric Promotion) and §5.6.2 (Binary Numeric Promotion).
The JVM spec simply has no byte-level arithmetic opcodes — they were intentionally omitted to keep the instruction set small and the stack 32-bit aligned.

Two natural follow-ups: what happens when the result overflows a byte, and why byte c = a + b causes a compile-time error without an explicit cast. Let me cover both.

The byte data type has a range of -128 to 127. Assume we declare the variables like byte a = 100; byte b = 100; and then try to add them with byte c = a + b; The compiler rejects that line, because a + b is promoted to int, and an int cannot be assigned to a byte without an explicit cast. And even with a cast, the mathematical result 200 does not fit in the byte range, so the value wraps around: (byte)(a + b) evaluates to -56.

This compile-then-disassemble workflow is the most direct way to verify such behavior. Most developers fear the JVM. Java developers understand it.

Codeest Software Factory Anirudh Mangore Sandip Magdum Mehvish Fansopkar Mitali Dere Sakshi Randive Shruti Chavan NILESH GHAVATE Shaikh Abdulkhadir Java Recruiting Group, OpenJDK
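Both follow-ups in one runnable sketch (values taken from the post; method names are mine): the missing-cast compile error stays as a comment, and the explicit cast shows the wrap-around.

```java
public class BytePromotionDemo {
    // a + b is already an int expression, so returning it as int needs no cast
    static int promoted(byte a, byte b) {
        return a + b;
    }

    // An explicit (byte) cast keeps only the low 8 bits: 200 wraps to -56
    static byte truncated(byte a, byte b) {
        // byte c = a + b;   // would NOT compile: possible lossy conversion int -> byte
        return (byte) (a + b);
    }

    public static void main(String[] args) {
        byte a = 100, b = 100;
        System.out.println(promoted(a, b));  // 200
        System.out.println(truncated(a, b)); // -56
    }
}
```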
-
Small concept. Big impact.
In Java: byte + byte = int
That's automatic type promotion — and it's one of those things that silently causes bugs if you don't fully understand it.
Back to basics = better code.
-
Hi Everyone,

I'm preparing for Java interviews, and while revising I thought of sharing some tricky Exception Handling questions I came across. Hopefully, these help others too.

1️⃣ What does Exception actually mean?
An exception is an abnormal condition that occurs at runtime. Examples:
File not found when the program runs
Database connection failure when the program runs
Null pointer access when the program runs
👉 All exceptions occur at runtime, without exception.

2️⃣ Then why are Checked Exceptions called "compile-time exceptions"?
They are not thrown at compile time. They are checked by the compiler at compile time.
✅ The compiler forces you to handle them before running the program.

Simple example (makes it crystal clear):

FileReader fr = new FileReader("data.txt");

What the compiler thinks: "This file may or may not exist at runtime. Handle this possibility NOW."
So the compiler gives an error: Unhandled exception: FileNotFoundException
✅ The compiler is predicting risk, not observing runtime behavior.

Correct Rule to Remember ⭐
Checked exceptions are "predictable risk conditions" that Java forces you to handle before execution.
Unchecked exceptions are "programming bugs" that Java assumes you'll fix by writing correct code.

Why still use an unchecked exception? Because:
It is still a programmer error
The caller violated the method contract
Handling it won't recover meaningfully

1️⃣ Why Custom Exceptions are Needed
✅ Real-time scenario: a banking / enterprise app. You have layers: Controller, Service, DAO, UI. If something goes wrong in the Service layer, you must inform the upper layers properly, not just print.
✅ Using a Custom Exception (Correct Design)

if (balance < amount) {
    throw new InsufficientBalanceException("Insufficient balance");
}

Why this is better ✅
Stops execution immediately
Error travels up the call stack
Caller must decide what to do
Clean separation of logic & error handling
Easier logging, monitoring, retry, rollback

throw vs throws:

1️⃣ throw → throwing the exception (inside the method)

if (amount > balance) {
    throw new InsufficientBalanceException("Not enough balance");
}

✅ Here: the exception is created and thrown, inside the method body.
👉 throw is the action

2️⃣ throws → declaring responsibility (method signature)

public void withdraw(int amount) throws InsufficientBalanceException {
    if (amount > balance) {
        throw new InsufficientBalanceException("Not enough balance");
    }
}

✅ Here the method tells the caller: "I might throw this exception. Be ready."
👉 throws is the warning/contract

Caller's responsibility 👇

try {
    account.withdraw(5000);
} catch (InsufficientBalanceException e) {
    System.out.println(e.getMessage());
}

✅ Exception handled by the caller
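Pieced together, those fragments form one runnable flow. A sketch: the Account class, its starting balance of 1000, and the BankDemo driver are my illustrative additions.

```java
// Checked custom exception: callers are forced to handle or declare it
class InsufficientBalanceException extends Exception {
    InsufficientBalanceException(String message) { super(message); }
}

class Account {
    private int balance = 1000; // illustrative starting balance

    // 'throws' declares the contract in the signature...
    public void withdraw(int amount) throws InsufficientBalanceException {
        if (amount > balance) {
            // ...'throw' performs the action inside the body
            throw new InsufficientBalanceException("Not enough balance");
        }
        balance -= amount;
    }

    public int balance() { return balance; }
}

public class BankDemo {
    public static void main(String[] args) {
        Account account = new Account();
        try {
            account.withdraw(5000); // more than the balance
        } catch (InsufficientBalanceException e) {
            System.out.println(e.getMessage()); // the caller decides what to do
        }
    }
}
```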
-
Not everything that looks equivalent on paper behaves the same in reality, especially at scale. Here's a simple example to illustrate this:

"Given a sorted array and a target value, return the index of the target if it exists, otherwise return -1."

This is the standard binary search problem. There are 2 clean ways to solve it in Java:

1. Iterative solution – use a loop, keep narrowing the search space by updating left and right.
2. Recursive solution – at each step, call the function again on either the left half or the right half.

Both are correct. Both run in O(log n). But which one actually performs better in Java? At first glance, they seem identical: they do the same work and even take the same number of steps (~log n). But in practice, the iterative version usually wins. Why?

1️⃣ Every recursive call has a cost (CPU overhead)
Each recursive step is a method call. That means the JVM has to:
jump to a new method
pass parameters (left, right)
allocate a new stack frame
return back after execution
Even though each step is small, this overhead adds up across all calls. In the iterative version, all of this happens inside a single loop.
➡️ Same logic, but fewer method calls → less CPU work

2️⃣ Recursion uses extra memory (call stack)
Every recursive call stores its state on the call stack: the current bounds, local variables like mid, and return information. So memory usage grows with the depth of recursion (O(log n) here). Iteration reuses the same variables for every step.
➡️ Iteration uses constant memory (O(1))

3️⃣ JVM + JIT optimizations favor loops
Java uses a JIT (Just-In-Time) compiler that optimizes frequently executed ("hot") code. Loops are predictable → easier to optimize (branching, bounds checks, etc.). Recursive calls still behave like method invocations → harder to fully optimize away. The compiled code is stored in the JVM's code cache, so hot loops become very efficient over time.
➡️ Iterative code aligns better with how the JVM optimizes execution

4️⃣ No tail-call optimization in Java
In some languages, recursion can be internally converted into a loop (tail-call optimization). Java does not guarantee this, so every recursive step still creates a new stack frame and adds overhead.
➡️ The cost of recursion remains

5️⃣ Simpler and safer execution model
Iteration is easier to reason about at runtime: no deep call chains, more predictable control flow.
➡️ This matters as systems grow in complexity

This isn't just about binary search. Execution model matters. At scale, small differences become real issues: latency, memory, even stack overflows. I recently saw this in production where a recursive flow with large inputs hit a stack overflow. Same logic on paper. Very different behavior at runtime.

#Java #JVM #PerformanceEngineering #Scalability #BackendEngineering
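Both versions from the comparison above, side by side. A sketch: the performance points apply to any equivalent implementation.

```java
public class BinarySearchDemo {
    // Iterative: O(log n) time, O(1) extra space; reuses lo/hi inside one loop.
    static int iterative(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids int overflow of (lo + hi)
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    // Recursive: same O(log n) steps, but each step is a JVM method call
    // with its own stack frame, so stack usage is O(log n).
    static int recursive(int[] a, int target, int lo, int hi) {
        if (lo > hi) return -1;
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        return a[mid] < target
                ? recursive(a, target, mid + 1, hi)
                : recursive(a, target, lo, mid - 1);
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 12, 23, 38, 56, 72, 91};
        System.out.println(iterative(sorted, 23));                       // 4
        System.out.println(recursive(sorted, 23, 0, sorted.length - 1)); // 4
        System.out.println(iterative(sorted, 7));                        // -1
    }
}
```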
-
🚀 Java's volatile Keyword — The Most Misunderstood Concept (Explained Like Real Life)

If you've worked with multithreading, you've probably seen volatile… and thought: 👉 "It makes things thread-safe, right?" ❌ Not exactly. Let's break it down in a way that actually sticks 👇

🏠 Real-Life Example: WhatsApp Status Problem
Imagine: you update your WhatsApp status, but your friend still sees the old status for a while 😅 Why? 👉 Because their app is showing a cached version, not the latest one.

🧠 The Same Problem Happens in Java Threads
Each thread may work with its own cached copy of a variable (CPU cache / registers). So if one thread updates a variable: 👉 other threads may still see the old value. 💥 This is called a visibility problem.

⚡ What volatile Actually Does
When you mark a variable as volatile:

volatile boolean isRunning = true;

👉 You're telling the JVM: "Always read/write this variable directly from main memory."

📌 So What Problems Does It Solve?
✔️ Guarantees visibility
✔️ Prevents threads from using stale values

⚠️ But Here's the Catch (Important for Interviews)
👉 volatile does NOT guarantee:
❌ Atomicity
❌ Thread safety for compound operations

💥 Classic Mistake Example

volatile int count = 0;
count++; // Not safe ❌

Why? 👉 count++ is NOT a single operation. It is actually:
1️⃣ Read
2️⃣ Increment
3️⃣ Write
Two threads can still interleave these steps and lose updates.

🧠 What Else Does volatile Do? (Deep Concept)
👉 It prevents instruction reordering. Sounds complex?
Let's simplify 👇

🍳 Real-Life Analogy: Cooking Order
Imagine making tea: 1️⃣ boil water, 2️⃣ add tea leaves. Now imagine someone reorders it: 👉 add tea leaves first, then boil 😅 The program still runs… but the result is wrong.

⚙️ The Same Happens with CPU Optimizations
To improve performance, the JVM/CPU may reorder instructions. 👉 volatile prevents this for reads and writes of that variable.

🔥 Most Important Use Case: Stop-Thread Pattern

volatile boolean running = true;

while (running) {
    // do work
}

Another thread can safely do:

running = false;

👉 Without volatile, the loop might NEVER stop.

🧠 Interview Questions (Answered Simply)
👉 What problem does volatile solve? → Visibility + ordering
👉 Is volatile thread-safe? → ❌ No (only for simple reads/writes)
👉 Difference between volatile & synchronized?

| volatile | synchronized |
|----------|--------------|
| Visibility only | Visibility + atomicity |
| No locking | Uses locking |
| Faster | Slower |

🎯 When Should You Use volatile?
✔️ Status flags (true/false)
✔️ Configuration updates
✔️ One writer, multiple readers
❌ Avoid for: counters, banking logic, complex shared state

#Java #Multithreading #Volatile #Concurrency #BackendDevelopment #InterviewPrep #SoftwareEngineer
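The stop-thread pattern above as a complete sketch (the 100 ms sleep and iteration counter are my additions, arbitrary choices): the main thread clears the flag, and the volatile read in the worker's loop guarantees it sees the write and exits.

```java
public class VolatileStopDemo {
    static volatile boolean running = true;

    // Spins until another thread clears the flag; returns the iteration count.
    static long spinUntilStopped() {
        long iterations = 0;
        while (running) {   // volatile read: always sees the latest written value
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() ->
                System.out.println("stopped after " + spinUntilStopped() + " iterations"));
        worker.start();

        Thread.sleep(100);  // let the worker spin briefly (arbitrary duration)
        running = false;    // volatile write: guaranteed visible to the worker
        worker.join();      // without volatile, this join could hang forever
    }
}
```

Without the volatile modifier the JIT is free to hoist the flag read out of the loop, turning it into while (true); that is the stale-value trap the post warns about.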
-
Every Java language construct imports a set of change drivers into the code that uses it. A non-static inner class imports the enclosing instance's entire driver set. A lambda imports only what it explicitly closes over. A `record` bounds the driver set to its declared components. A sealed interface with pattern matching bounds it to the contract. The choice between constructs is therefore a structural question, not a style one: which construct bounds the driver set to what the situation actually requires? I wrote an article applying this lens to Java 25. It walks through non-static inner classes, static nested classes, lambdas, method references, anonymous classes, records, sealed interfaces with pattern matching, `Optional`, `Result`, and `enum` — and identifies, for each, the structural situation where the construct is the right choice and the situations where a lighter alternative exists. The underlying principle is the Independent Variation Principle (IVP): a module's driver set should contain exactly the drivers its elements genuinely vary with — no more, no fewer. Java's evolution since Java 8 has been a series of additions that narrow the gap between what constructs force you to couple to and what the situation requires. Reading the language history through this lens makes the direction visible. https://lnkd.in/exxXMez4
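A small sketch of the "bounded driver set" idea (my example, not from the linked article; requires Java 21+ for pattern matching in switch): a sealed interface plus records bounds what callers must handle to exactly the declared cases, and the compiler verifies the switch is exhaustive.

```java
// Hypothetical domain: area()'s driver set is bounded to the two permitted
// cases; adding a third Shape breaks compilation at every non-exhaustive switch.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class ShapeDemo {
    static double area(Shape s) {
        return switch (s) {           // exhaustive over the sealed hierarchy: no default needed
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r   -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // 12.0
    }
}
```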
-
...........🅾🅾🅿🆂 !!! 𝑷𝒍𝒂𝒕𝒇𝒐𝒓𝒎 卩卂尺 𝑺𝒊𝒎𝒓𝒂𝒏 𝙎𝙚 𝕄𝕦𝕝𝕒𝕜𝕒𝕥 🅷🆄🅸, but 🆁🅾🅱🆄🆂🆃 𝔸𝕣𝕦𝕟 nikla D͓̽i͓̽l͓̽ ka 🅳🆈🅽🅰🅼🅸🅲......!!!

Guys, you must be wondering what nonsense I am writing...."kuch shaayar likhna hai toa kaahi aur likh, linkedin pe kiyu"??? (roughly: "if you want to write poetry, write it somewhere else, why on LinkedIn?") But guess what.....the above phrase represents the features of Java:

🅾🅾🅿🆂:- 𝗢𝗯𝗷𝗲𝗰𝘁 𝗢𝗿𝗶𝗲𝗻𝘁𝗲𝗱 𝗣𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴....'S' is just a connecting letter...don't consider it...

𝑷𝒍𝒂𝒕𝒇𝒐𝒓𝒎:- 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗶𝗻𝗱𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝘁.....Java apps don't need to be recoded if you change the operating system😇😇😇

卩卂尺:- the word "par" sounds similar to "por", and you can then call it 𝗣𝗼𝗿𝘁𝗮𝗯𝗹𝗲...Definitely, platform independence makes Java portable

𝑺𝒊𝒎𝒓𝒂𝒏:- Either you can say Simran sounds similar to simple, hence 𝗦𝗶𝗺𝗽𝗹𝗲 is another feature....or say Simran is a very 𝗦𝗶𝗺𝗽𝗹𝗲 girl...

𝕄𝕦𝕝𝕒𝕜𝕒𝕥:- To say Mulakat, you need to say "Mul"...and at the end you are also using a "t"......guess it guess it.....yes, it is 𝑴𝒖𝒍𝒕𝒊 𝑻𝒉𝒓𝒆𝒂𝒅𝒊𝒏𝒈....you can split the smaller tasks in your programs into individual threads and then execute them concurrently to save time....

🅷🆄🅸:- doesn't "Hui" sound almost similar to "high"? I know there is a lot of difference, but say it with the same energy....just say "Hui" se 𝙃𝙞𝙜𝙝 𝙋𝙚𝙧𝙛𝙤𝙧𝙢𝙖𝙣𝙘𝙚.....of course Java gives a high level of performance as it is 𝑱𝒖𝒔𝒕 𝒊𝒏 𝒕𝒊𝒎𝒆 𝒄𝒐𝒎𝒑𝒊𝒍𝒆𝒅....

🆁🅾🅱🆄🆂🆃:- Yes, of course Java is 𝗥𝗼𝗯𝘂𝘀𝘁 because of its strong memory management.....

𝔸𝕣𝕦𝕟:- Arun contains "A" and "N".....Arun se 𝘼𝙧𝙘𝙝𝙞𝙩𝙚𝙘𝙩𝙪𝙧𝙖𝙡 𝙉𝙚𝙪𝙩𝙧𝙖𝙡....right??? The size of every data type in Java is the same on both 32-bit and 64-bit systems

D͓̽i͓̽l͓̽ :- "Dil" has "DI", and "DI" se 𝗗𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗲𝗱...Java applications can be distributed and run at the same time on different computers in the same network

🅳🆈🅽🅰🅼🅸🅲:- Yes, Java is also 𝗗𝘆𝗻𝗮𝗺𝗶𝗰 due to its dynamic class loading feature....

Just repeat the above phrase 2 to 3 times and you will be able to retain all the features of Java until you take your last breath.......100% guarantee....
-
🚨 Java Records: Core Mechanics Most Developers Miss

After understanding why records exist, the next step is more important: how do records actually behave under the hood? Because this is where most misconceptions start.

🧠 First: records are NOT just "shorter classes." They are a language-level construct with strict rules. When you write:

public record User(Long id, String name) {}

Java doesn't just "reduce boilerplate"… 👉 it generates a fully-defined, immutable data structure.

🔍 What the compiler actually creates
Behind the scenes, this becomes:
private final fields
a canonical constructor (all fields required)
accessor methods
equals(), hashCode(), toString()
Everything is tied to the data itself, not object identity.

⚠️ Common mistake: "Records don't have getters."
Not true. They DO have accessors — just not JavaBean-style. Instead of getId(), you get id(). 👉 This follows a different philosophy: "state is the API."

🔒 Immutability is enforced — not optional
In a record: fields are always final, no setters are allowed, and the object must be fully initialized. There is no way to create a "half-filled" object.

🚫 No default constructor (and that's intentional)
Unlike normal classes: ❌ no no-arg constructor, ✅ only the canonical constructor (all fields). This enforces: every record instance is valid at creation time.

🔥 Constructor behavior (important)
You can customize construction — but with rules. Example:

public record User(Long id, String name) {
    public User {
        if (id == null) {
            throw new IllegalArgumentException("id cannot be null");
        }
    }
}

👉 This is a compact constructor. You can:
✔ Add validation
✔ Normalize data
✔ Add logic
But you cannot:
❌ Skip field initialization
❌ Break immutability

⚖️ Records vs Lombok (under-the-hood mindset)
Lombok → generates code you could have written
Records → enforce rules you cannot bypass
That's a huge difference.
🧩 Subtle but critical behavior
Records use value-based equality. That means:

new User(1L, "A").equals(new User(1L, "A")) // true

👉 Equality is based on data, not memory reference.

🧠 Why this matters in real systems
Records eliminate: partial object states, hidden mutations, inconsistent equality logic. They give you:
✔ Predictable behavior
✔ Safer concurrency
✔ Cleaner APIs

🚨 One key takeaway
Records don't just reduce code… they change how objects behave fundamentally. If you still treat records like normal POJOs, you'll miss the guarantees they provide.

#Java #JavaRecords #BackendDevelopment #SpringBoot #SystemDesign #SoftwareEngineering #JavaDeveloper #CleanCode #Concurrency #Programming
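The mechanics above in one runnable sketch (the User record is reused from the post; the RecordDemo wrapper is my addition):

```java
public class RecordDemo {
    // Same record as in the post, with its compact constructor
    record User(Long id, String name) {
        User {
            if (id == null) {
                throw new IllegalArgumentException("id cannot be null");
            }
        }
    }

    public static void main(String[] args) {
        User a = new User(1L, "A");
        User b = new User(1L, "A");
        System.out.println(a.equals(b));  // true: value-based equality, not identity
        System.out.println(a.name());     // accessor is name(), not getName()
        try {
            new User(null, "X");          // rejected before construction completes
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```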
-
Why Creating Too Many Threads in Java Degrades Performance (And How ExecutorService Solves It)

While learning multithreading in Java, I initially assumed that increasing the number of threads would improve performance. However, practical analysis shows that excessive thread creation leads to the opposite effect. Here are the key insights with real-world numbers:

CPU Parallelism is Limited
Consider a typical system with 4 to 8 CPU cores. Even if you create 1000 threads, only 4–8 of them can execute simultaneously. The remaining threads are scheduled and kept waiting.
Example: if each task takes 2 seconds:
Sequential (1 thread, 1000 tasks) → ~2000 seconds
With 8 cores → theoretical minimum ≈ 250 seconds

Context Switching Overhead
The operating system switches between threads rapidly. Each switch takes roughly 1–10 microseconds, and with hundreds of threads there are thousands of switches per second.
Result: the CPU spends more time switching than doing useful work.

Memory Consumption Per Thread
Each thread has its own stack: ~512 KB to 1 MB per thread.
Example: 1000 threads → ~500 MB to 1 GB of memory usage.
This can lead to: OutOfMemoryError: unable to create new native thread

System Instability Under Load
Too many threads can cause high garbage collection pressure, scheduling delays, and application slowdown or crashes.

How Java Solves This: ExecutorService (Thread Pools)
Instead of creating a new thread for every task, Java provides the Executor framework. What ExecutorService does:
Fixed number of threads — you define a limit (e.g., 10 threads), and only those threads are created.
Task queue — submitted tasks are placed in a queue. Example: 1000 tasks → 10 running, 990 waiting.
Worker threads — each thread picks a task, executes it, and then picks the next one. Threads are reused, not recreated.
Reduced context switching — fewer threads → fewer switches → better CPU efficiency.
Controlled memory usage — limited threads → stable memory usage.
Example (fetchPrice() stands in for any task):

ExecutorService executor = Executors.newFixedThreadPool(10);

for (int i = 0; i < 1000; i++) {
    executor.submit(() -> fetchPrice());
}

What happens: 10 threads run tasks, the remaining tasks wait in the queue, and execution happens in batches.

Key Insight
Threads are not free resources. They are memory-intensive, CPU-scheduled, and expensive to manage. ExecutorService solves this by limiting threads, reusing them, and managing tasks efficiently.

Conceptual Shift
From: "How many threads can I create?"
To: "How can I manage concurrency efficiently?"

This understanding is fundamental for building scalable backend systems in Java.

#Java #Multithreading #ExecutorService #Concurrency #BackendDevelopment #SystemDesign
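A complete, runnable version of that fragment (fetchPrice() replaced by a counter increment so it runs anywhere; pool and task counts match the post). Note the shutdown/awaitTermination calls the fragment omits: without them the pool's non-daemon threads keep the JVM alive.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Runs taskCount tiny tasks on a fixed pool and returns how many completed.
    static int runTasks(int poolSize, int taskCount) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(poolSize);
        AtomicInteger completed = new AtomicInteger();

        for (int i = 0; i < taskCount; i++) {
            executor.submit(completed::incrementAndGet); // stand-in for fetchPrice()
        }

        executor.shutdown();                             // no new tasks; the queue drains
        executor.awaitTermination(1, TimeUnit.MINUTES);  // wait for the batches to finish
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 1000 tasks, but never more than 10 live threads
        System.out.println(runTasks(10, 1000));
    }
}
```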
-
🚀 100 Days Java + DSA Challenge | Day 3

Today I worked on loops and functions/methods in Java. I know these are basic problems, but solving them gave me real hands-on experience with:
✔ How loops actually work step by step
✔ How to design and call methods
✔ Breaking problems into smaller reusable parts
✔ Writing cleaner and more structured code

💻 Problems I solved today:
• Print numbers (1 to N, M to N)
• Reverse printing
• Sum & product of ranges
• Factorial
• Finding factors and counting factors

📌 Here's the code I practiced 👇

public class Day3 {

    public static void printNumbers(int n) {
        for (int i = 1; i <= n; i++) {
            System.out.print(i + " ");
        }
        System.out.println();
    }

    public static void printNumRange(int m, int n) {
        for (int i = m; i <= n; i++) {
            System.out.print(i + " ");
        }
        System.out.println();
    }

    public static void printReverseFromNto1(int n) {
        for (int i = n; i > 0; i--) {
            System.out.print(i + " ");
        }
        System.out.println();
    }

    public static void printReverseFromNtoM(int m, int n) {
        for (int i = n; i >= m; i--) {
            System.out.print(i + " ");
        }
        System.out.println();
    }

    public static void sumOfNaturalNumbersFrom1toN(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        System.out.println(sum);
    }

    public static void factorialOfNumber(int n) {
        int fact = 1;
        for (int i = 1; i <= n; i++) {
            fact *= i;
        }
        System.out.println(fact);
    }

    public static void sumOfMtoN(int m, int n) {
        int sum = 0;
        for (int i = m; i <= n; i++) {
            sum += i;
        }
        System.out.println(sum);
    }

    public static void productOfMtoN(int m, int n) {
        int product = 1;
        for (int i = m; i <= n; i++) {
            product *= i;
        }
        System.out.println(product);
    }

    public static void printFactorsOfNumber(int n) {
        for (int i = 1; i <= n; i++) {
            if (n % i == 0) {
                System.out.print(i + " ");
            }
        }
        System.out.println();
    }

    public static void countOfFactors(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++) {
            if (n % i == 0) {
                count++;
            }
        }
        System.out.println(count);
    }
}

These may look simple, but practicing them helped me think in terms of
logic, loops, and functions, which is the foundation for solving bigger DSA problems. Day by day, I’m building my problem-solving muscle 💪 #100DaysOfCode #Java #DSA #CodingJourney #Programming #SoftwareDevelopment
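The Day3 methods print instead of returning values, which makes them hard to check automatically. A returning variant of two of them (a sketch; class and method names are mine) makes the same loop logic testable:

```java
public class Day3Checks {
    // Same loop as factorialOfNumber, but returning the result
    // (long avoids the early overflow an int factorial hits at 13!)
    static long factorial(int n) {
        long fact = 1;
        for (int i = 1; i <= n; i++) {
            fact *= i;
        }
        return fact;
    }

    // Same loop as countOfFactors, returning the count
    static int countFactors(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++) {
            if (n % i == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5));     // 120
        System.out.println(countFactors(12)); // 6 (1, 2, 3, 4, 6, 12)
    }
}
```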