🚀 Bellman-Ford Algorithm in Java – From Natural Graph to Edge-Based Design

I recently revisited the Bellman-Ford shortest path algorithm and refined my implementation in 3 progressive stages. Each stage improved my understanding of how graph representation affects algorithm clarity.

🧩 1. Natural Version – Adjacency-List-Based Bellman-Ford
My first implementation used a standard adjacency list graph:
Graph stored as List<List<Pair>>
Relaxation done through node → neighbors traversal
Edges were implicitly handled inside the adjacency structure
✔ Works correctly
✔ Natural way to represent graphs
⚠ But the Bellman-Ford logic felt "hidden" inside nested loops

⚡ 2. Refactored Version – Converting the Adjacency List to an Edge List
Next, I introduced an explicit Edge structure:
Created an Edge(src, dest, weight) class
Converted the adjacency list → edge list using a helper method
Then applied Bellman-Ford over the generated edges
Now the algorithm became clearer: 👉 "Relax all edges V-1 times"
✔ Much more readable
✔ Matches the algorithm's definition better
✔ Removes nested traversal during relaxation

🔥 3. Final Version – Pure Edge-List Bellman-Ford
Finally, I implemented a clean version where the graph is directly represented as an edge list:
List<Edge> is the primary structure
No adjacency conversion needed
Direct edge relaxation in each iteration
Includes negative-cycle detection
✔ Simplest and most intuitive version
✔ Closest to textbook Bellman-Ford
✔ Best for interviews and conceptual clarity

🧠 Key Insight
This evolution taught me: Bellman-Ford is fundamentally an edge-driven algorithm, not a node-traversal algorithm.
Adjacency list → good for general graph problems
Edge list → natural fit for relaxation-based algorithms

🔥 Final Takeaway
All three versions are correct, but they differ in clarity:
✔ Adjacency list → natural but indirect
✔ Conversion approach → helps transition understanding
✔ Edge list → cleanest and most algorithm-aligned
Sometimes improving code is not about changing logic — it's about aligning structure with the nature of the algorithm.

Next step: Floyd-Warshall and shortest-path reconstruction.

#Java #DSA #Graphs #BellmanFord #Algorithms #ProblemSolving #CodingJourney #BackendDevelopment
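The post doesn't include its code, but the final edge-list stage it describes ("relax all edges V-1 times", plus negative-cycle detection) can be sketched like this. Class and method names (`BellmanFordEdgeList`, `shortestPaths`) are mine; only the `Edge(src, dest, weight)` shape comes from the post.

```java
import java.util.Arrays;
import java.util.List;

public class BellmanFordEdgeList {
    // Edge structure as described in the post: Edge(src, dest, weight)
    static class Edge {
        final int src, dest, weight;
        Edge(int src, int dest, int weight) { this.src = src; this.dest = dest; this.weight = weight; }
    }

    static final int INF = Integer.MAX_VALUE / 2; // halved to avoid overflow when relaxing

    // Returns dist[] from source, or null if a negative cycle is reachable
    static int[] shortestPaths(int vertexCount, List<Edge> edges, int source) {
        int[] dist = new int[vertexCount];
        Arrays.fill(dist, INF);
        dist[source] = 0;

        // Core of the algorithm: relax all edges V-1 times
        for (int i = 0; i < vertexCount - 1; i++) {
            for (Edge e : edges) {
                if (dist[e.src] + e.weight < dist[e.dest]) {
                    dist[e.dest] = dist[e.src] + e.weight;
                }
            }
        }
        // One extra pass: any further improvement means a negative cycle
        for (Edge e : edges) {
            if (dist[e.src] + e.weight < dist[e.dest]) {
                return null;
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        List<Edge> edges = List.of(
                new Edge(0, 1, 4),
                new Edge(0, 2, 5),
                new Edge(1, 2, -2)); // negative weight, but no negative cycle
        System.out.println(Arrays.toString(shortestPaths(3, edges, 0))); // [0, 4, 2]
    }
}
```

Note how the relaxation loop is a flat iteration over edges, which is exactly why the edge-list version reads closer to the textbook definition than the nested node → neighbor traversal.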
LeetCode Practice - 12. Integer to Roman
The problem is solved using Java.

Roman numerals are formed by appending the conversions of decimal place values from highest to lowest. Converting a decimal place value into a Roman numeral has the following rules:
If the value does not start with 4 or 9, select the symbol of the maximal value that can be subtracted from the input, append that symbol to the result, subtract its value, and convert the remainder to a Roman numeral.
If the value starts with 4 or 9, use the subtractive form representing one symbol subtracted from the following symbol; for example, 4 is 1 (I) less than 5 (V): IV, and 9 is 1 (I) less than 10 (X): IX. Only the following subtractive forms are used: 4 (IV), 9 (IX), 40 (XL), 90 (XC), 400 (CD) and 900 (CM).
Only powers of 10 (I, X, C, M) can be appended consecutively at most 3 times to represent multiples of 10. You cannot append 5 (V), 50 (L), or 500 (D) multiple times. If you need to append a symbol 4 times, use the subtractive form.

Given an integer, convert it to a Roman numeral.

Example 1:
Input: num = 3749
Output: "MMMDCCXLIX"
Explanation:
3000 = MMM as 1000 (M) + 1000 (M) + 1000 (M)
700 = DCC as 500 (D) + 100 (C) + 100 (C)
40 = XL as 10 (X) less of 50 (L)
9 = IX as 1 (I) less of 10 (X)
Note: 49 is not 1 (I) less of 50 (L) because the conversion is based on decimal places

Example 2:
Input: num = 58
Output: "LVIII"
Explanation: 50 = L, 8 = VIII

Example 3:
Input: num = 1994
Output: "MCMXCIV"
Explanation: 1000 = M, 900 = CM, 90 = XC, 4 = IV

Constraints: 1 <= num <= 3999

#LeetCode #Java #CodingPractice #ProblemSolving #DSA #DeveloperJourney #TechLearning
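The post doesn't show its solution, but the rules above reduce to a greedy sketch: list the values from highest to lowest, including the six subtractive forms, and repeatedly append the largest symbol that still fits. Names here (`IntegerToRoman`, `intToRoman`) are my own.

```java
public class IntegerToRoman {
    // Values from highest to lowest, with the subtractive forms (900, 400, 90, 40, 9, 4) included
    private static final int[] VALUES = {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
    private static final String[] SYMBOLS =
            {"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"};

    static String intToRoman(int num) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < VALUES.length && num > 0; i++) {
            // Append the current symbol as many times as its value fits into the remainder
            while (num >= VALUES[i]) {
                sb.append(SYMBOLS[i]);
                num -= VALUES[i];
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(intToRoman(3749)); // MMMDCCXLIX
        System.out.println(intToRoman(58));   // LVIII
        System.out.println(intToRoman(1994)); // MCMXCIV
    }
}
```

Because the subtractive forms sit in the table between their neighbors, the greedy loop never needs special-case logic for 4s and 9s.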
🌳 TreeSet in Java — simple, but powerful.
Most developers use List or HashSet… but ignore TreeSet, even when it's the perfect choice.

🔹 What is TreeSet? A sorted Set in Java
👉 Stores unique elements
👉 Maintains elements in sorted order

🔹 Why is it important? Because sometimes you don't just need data…
👉 You need it ordered automatically
No extra sorting step required.

🔹 When should you use TreeSet?
✔ When you need sorted data
✔ When you want no duplicates
✔ When frequent range queries are needed

🔹 Example 👇
Set<Integer> set = new TreeSet<>();
set.add(5);
set.add(1);
set.add(3);
System.out.println(set);
👉 Output: [1, 3, 5]

🔹 Behind the scenes 👀 TreeSet uses a Red-Black Tree
👉 Operations (add, remove, search) → O(log n)

🔹 Interesting fact 💡
👉 TreeSet doesn't use equals() for uniqueness; it relies on compareTo() / a Comparator.
That means: if the comparison says two elements are equal → one will be ignored.

💡 Real-world example: Think of a leaderboard 🏆
👉 Always sorted
👉 No duplicates
👉 Easy to get top/bottom performers

👉 HashSet = Fast
👉 TreeSet = Sorted
Choose based on your need.

Want to go deeper into Java & System Design? 👉 https://lnkd.in/gjQhR3_Y
Follow for more on AI, Java & System Design 🚀

#Java #TreeSet #JavaDeveloper #Collections #BackendDevelopment #SoftwareEngineering #Developers #Tech #Learning
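To expand on the "range queries" point above: since TreeSet implements NavigableSet, it exposes ordered-view and nearest-neighbor methods directly. A small sketch (the leaderboard framing is my own continuation of the post's example):

```java
import java.util.TreeSet;

public class TreeSetRangeQueries {
    public static void main(String[] args) {
        TreeSet<Integer> scores = new TreeSet<>();
        scores.add(5);
        scores.add(1);
        scores.add(3);
        scores.add(9);

        System.out.println(scores.first());     // 1 -- lowest score
        System.out.println(scores.last());      // 9 -- highest score
        System.out.println(scores.headSet(5));  // [1, 3] -- strictly below 5
        System.out.println(scores.tailSet(5));  // [5, 9] -- 5 and above
        System.out.println(scores.ceiling(4));  // 5 -- smallest element >= 4
        System.out.println(scores.floor(4));    // 3 -- largest element <= 4
    }
}
```

Each of these runs in O(log n) on the underlying red-black tree, and the headSet/tailSet results are live views of the set rather than copies.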
🚀 DSA Journey — LeetCode Practice
📌 Problem: Construct Binary Tree from Preorder and Inorder Traversal
💻 Language: Java

🔹 Approach: To solve this problem, I used Depth-First Search (DFS) with Recursion to construct the binary tree.
• The first element of preorder is always the root
• Find the root index in inorder to divide the left and right subtrees
• Recursively build the left subtree first
• Then recursively build the right subtree
• Maintain a global index (preIndx) to track the preorder traversal

📌 Traversal Logic:
Preorder → Root → Left → Right
Inorder → Left → Root → Right

⚠️ Challenge Faced: Initially, I passed preIndx as a parameter, which caused incorrect results due to Java's pass-by-value behavior.
✅ Solution: Used a global variable for preIndx so it remains consistent across all recursive calls.

⏱ Time Complexity: O(n²) (using linear search)
🧩 Space Complexity: O(n) (recursion stack)
💡 Optimization: Using a HashMap to store inorder indices can reduce the time complexity to O(n).

📖 Key Learning: This problem strengthened my understanding of recursion, tree construction, and how traversal orders help rebuild tree structures. It also highlighted the importance of handling shared state correctly in recursive solutions.

#DSA #Java #LeetCode #BinaryTree #Recursion #DFS #CodingJourney #ProblemSolving #LearningInPublic #TreeTraversal
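The approach above, with the O(n) HashMap optimization already applied, can be sketched as follows. I've kept the post's `preIndx` name (as an instance field rather than a method parameter, matching the shared-state fix described); the `TreeNode` class is a minimal stand-in for LeetCode's.

```java
import java.util.HashMap;
import java.util.Map;

public class BuildTree {
    // Minimal stand-in for LeetCode's TreeNode, so the sketch is self-contained
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    private int preIndx = 0;  // shared across recursive calls (the pass-by-value fix)
    private final Map<Integer, Integer> inorderPos = new HashMap<>();

    TreeNode buildTree(int[] preorder, int[] inorder) {
        // HashMap of inorder positions: root lookup becomes O(1), total O(n)
        for (int i = 0; i < inorder.length; i++) inorderPos.put(inorder[i], i);
        return build(preorder, 0, inorder.length - 1);
    }

    private TreeNode build(int[] preorder, int inLeft, int inRight) {
        if (inLeft > inRight) return null;
        TreeNode root = new TreeNode(preorder[preIndx++]); // next preorder element is the root
        int rootPos = inorderPos.get(root.val);            // split point in inorder
        root.left = build(preorder, inLeft, rootPos - 1);  // left subtree first (preorder order)
        root.right = build(preorder, rootPos + 1, inRight);
        return root;
    }

    public static void main(String[] args) {
        TreeNode root = new BuildTree().buildTree(new int[]{3, 9, 20, 15, 7},
                                                  new int[]{9, 3, 15, 20, 7});
        System.out.println(root.val + " " + root.left.val + " " + root.right.val); // 3 9 20
    }
}
```

This assumes unique node values (which the LeetCode problem guarantees); with duplicates, the inorder-position map would be ambiguous.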
When I recently posted about the speed-up in Mark Burgoyne's pyResToolbox using Rust, there was some pushback concerning the Rust benchmarks, as I was showing that my Java implementation was faster. https://lnkd.in/dgk-WyZA

The general comment was that Rust should be faster than Java. I'd agree. Rust is compiled ahead of time and can optimise the compiled code for the target CPU. Java is compiled by a just-in-time compiler (https://lnkd.in/deEMcCnm) from bytecode that is generated to run on any CPU.

Since my previous intent was just to show Java versus Python, or Java versus Python with Rust acceleration, in a casual way, I decided to undertake a more comprehensive comparison that would allow a fairer assessment. Mark Burgoyne has supplied a variety of code examples (https://lnkd.in/dm7HyCd9) for the BNS viscosity method, including a pure Rust implementation and VBA for Excel.

The results of this more comprehensive comparison are shown in the image along with this post. Note the logarithmic scale for calculations per second! Note that the difference between single-pressure and multiple-pressure runs is that the multiple-pressure case does calculations for an array of pressure points on the same composition, so the loop can be optimised.

This shows that Rust with parallel threaded execution for the multiple-pressure-point solutions is by far the fastest implementation, reaching ~61 million calculations per second. In comparison, VBA for Excel manages a paltry 772 calculations per second. Rust is about 80,000 times faster, or nearly 5 orders of magnitude.

As a long-time advocate that most speed comes from the algorithm design itself, it is still remarkable to see just what a difference the language and environment make. Remember, these are benchmarks for an identical algorithm. The difference is purely the language implementation. This dramatically illustrates the difference between speed and convenience.

Where Excel and Python are (arguably) more accessible, parallel algorithms in Java and Rust deliver better speed. I think the Rust-accelerated Python solution that Mark wrote is a good compromise between the two.

As an aside, this is the first Rust code I've ever written or used. I'm pretty impressed.
Day 3 of Java with DSA Journey 🚀
📌 Topic: Guess Number Higher or Lower (LeetCode 374)
💬 Quote: "Efficiency is not about doing more; it's about eliminating what doesn't matter."

✨ What I Learned:
🔹 Binary Search Beyond Arrays: Binary Search isn't limited to arrays — it works perfectly on a number range like [1...n].
🔹 Working with APIs: Learned how to adapt logic based on API responses:
-1 → Guess is too high
1 → Guess is too low
0 → Correct answer
🔹 Power of Efficiency: Even for a huge range (up to 2³¹ - 1), Binary Search finds the answer in ~31 steps 🤯 Compared to Linear Search → practically impossible!
🔹 Complexity:
⏱ Time: O(log n)
📦 Space: O(1)

🧠 Problem Solved: ✔️ Guess Number Higher or Lower

💡 Key Insight: This problem highlights the "Narrowing the Search Space" concept. Each step eliminates half the possibilities — that's the magic of logarithmic algorithms ⚡

⚡ Interview Insight (3-Way Decision Logic): Unlike boundary problems, here we deal with three outcomes:
1️⃣ 0 → Found the number (return immediately)
2️⃣ -1 → Guess too high → shrink the right boundary: right = mid - 1
3️⃣ 1 → Guess too low → raise the left boundary: left = mid + 1
👉 Use while (left <= right) since the target is guaranteed to exist.

🔑 Takeaway: Consistency beats intensity. Showing up daily is what builds mastery.

#DSA #LeetCode #Java #CodingJourney #BinarySearch #ProblemSolving #100DaysOfCode #JavaDeveloper #Algorithms
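The 3-way decision logic described above can be sketched like this. On LeetCode the `guess` API is provided by the judge; here I stub it with a hypothetical `pick` field so the sketch is self-contained.

```java
public class GuessNumber {
    private final int pick; // hypothetical stand-in for the judge's hidden number

    GuessNumber(int pick) { this.pick = pick; }

    // Mimics LeetCode's API: -1 if num is too high, 1 if too low, 0 if correct
    int guess(int num) {
        if (num == pick) return 0;
        return num > pick ? -1 : 1;
    }

    int guessNumber(int n) {
        int left = 1, right = n;
        while (left <= right) {                  // target is guaranteed to exist
            int mid = left + (right - left) / 2; // avoids int overflow near 2^31 - 1
            int result = guess(mid);
            if (result == 0) return mid;         // found it
            if (result == -1) right = mid - 1;   // too high: shrink the right boundary
            else left = mid + 1;                 // too low: raise the left boundary
        }
        return -1; // unreachable when the API is consistent
    }

    public static void main(String[] args) {
        System.out.println(new GuessNumber(6).guessNumber(10)); // 6
    }
}
```

The `left + (right - left) / 2` form matters here: with n up to 2³¹ - 1, the naive `(left + right) / 2` can overflow.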
Day 43 - Understanding Runtime Polymorphism in Java

If compile-time polymorphism is about decisions made early, runtime polymorphism is where things get interesting — decisions are made while the program is running.

What is Runtime Polymorphism?
Runtime polymorphism is the ability of a program to decide which method to execute at runtime based on the object. It is achieved using method overriding.

🔹 Simple Idea: Same method name, same parameters… but different behavior depending on the object.

🔹 Example:
class Card {
    void swipe() { System.out.println("Please wait..."); }
}
class CreditCard extends Card {
    void swipe() { System.out.println("Payment via Credit Card"); }
}
class DebitCard extends Card {
    void swipe() { System.out.println("Payment via Debit Card"); }
}
public class Main {
    public static void main(String[] args) {
        Card c;
        c = new CreditCard();
        c.swipe(); // Credit Card method
        c = new DebitCard();
        c.swipe(); // Debit Card method
    }
}

🔹 What's happening here?
- The reference type is Card
- But the object changes (CreditCard / DebitCard)
- The JVM decides which method to call at runtime

🔹 Key Points:
✔ Happens at runtime (execution time)
✔ Achieved using method overriding
✔ Also called Dynamic Binding / Late Binding
✔ The JVM decides method execution based on the actual object

This is what makes Java powerful and flexible in real-world applications.

#Java #OOP #Polymorphism #CodingJourney #ProgrammingBasics
A lot of people get confused between Shallow Copy vs Deep Copy in Java 🤯 Let me break it down in the simplest way possible 👇

Imagine we create a class (like "Student") with:
- a primitive variable (like "roll")
- a String (like "name")
- and an array (like "marks[]")

🔹 Shallow Copy
In a shallow copy, we are not creating a new object for everything. Instead, we are copying the reference of some elements (especially arrays/objects).
👉 So if you change something like:
marks[0] = 100;
It will be reflected in both objects ❗ Because both are pointing to the same memory location.

🔹 Deep Copy
In a deep copy, we create a completely new copy of all elements, including arrays.
👉 So if you change:
marks[0] = 100;
It will NOT affect the other object ✅ Because both objects now have independent memory.

🔥 Key Takeaways
- "int" (primitive) → copied by value (safe ✔️)
- "String" → immutable, behaves safe ✔️
- "Array/Object" → needs deep copy manually ❗

💡 In one line:
Shallow Copy = same reference
Deep Copy = new memory, independent copy
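The Student example above can be sketched in code. The field names (`roll`, `name`, `marks`) follow the post; the copy methods and the demo `main` are my own illustration, assuming copy constructors rather than `clone()`.

```java
public class CopyDemo {
    // Hypothetical Student class matching the post's example
    static class Student {
        int roll;       // primitive: always copied by value
        String name;    // immutable: sharing the reference is safe
        int[] marks;    // mutable object: this is where shallow vs deep matters

        Student(int roll, String name, int[] marks) {
            this.roll = roll;
            this.name = name;
            this.marks = marks;
        }

        // Shallow copy: primitives are copied, but the marks array is SHARED
        Student shallowCopy() {
            return new Student(roll, name, marks);
        }

        // Deep copy: the marks array is cloned, so the copies are independent
        Student deepCopy() {
            return new Student(roll, name, marks.clone());
        }
    }

    public static void main(String[] args) {
        Student original = new Student(1, "Asha", new int[]{90, 80});

        Student shallow = original.shallowCopy();
        shallow.marks[0] = 100;
        System.out.println(original.marks[0]); // 100 -- shared array, the change leaks through

        Student fresh = new Student(1, "Asha", new int[]{90, 80});
        Student deep = fresh.deepCopy();
        deep.marks[0] = 100;
        System.out.println(fresh.marks[0]);    // 90 -- independent array, original untouched
    }
}
```

Note that `marks.clone()` is enough here only because the array holds primitives; an array of mutable objects would need its elements copied too.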
Day 7.2 of Java with DSA Journey 🚀
📌 Topic: Find All Numbers Disappeared in an Array (LeetCode 448)
Quote: "Constraints are not limitations—they are invitations to think smarter."

✨ What I learned today:
🔹 In-Place Hashing (Pro Trick): When extra space is restricted, the input array itself can act like a HashMap.
🔹 Index Mapping Insight: Since values are in the range [1, n], each number maps to an index: 👉 index = value - 1
🔹 Sign Flipping Technique: Mark elements as "visited" by flipping the value at the mapped index to negative.
✔️ Negative → already seen
✔️ Positive → missing number
🔹 Efficiency:
✔️ Time Complexity: O(n)
✔️ Space Complexity: O(1) (No extra data structures used!)

🧠 Problem Solved: ✔️ Find All Numbers Disappeared in an Array

💡 Key Insight: This problem completely changed how I look at arrays. Instead of using extra memory like a HashSet, I learned how to store state inside the array itself. That's a powerful mindset shift for interviews 🚀

⚡ Interview Insight (High-Value Pattern): Whenever you see:
- Numbers in range [1, n] or [0, n]
- A need to find missing / duplicate elements
- A constraint of O(1) space
👉 Think "Index as Hash Key"
This is a top-tier pattern asked in product-based companies.

Consistency is the real key 🔑

#DSA #LeetCode #Java #CodingJourney #ProblemSolving #Algorithms #Array #InterviewPrep #Optimization #Day8 #MCA #lnct #100DaysOfCode #SoftwareEngineering #InPlaceAlgorithms #TechLearning #JavaDeveloper
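The sign-flipping technique described above can be sketched like this (class and method names are mine; the `index = value - 1` mapping and the negative-marker idea are from the post):

```java
import java.util.ArrayList;
import java.util.List;

public class DisappearedNumbers {
    // In-place hashing: the sign of nums[value - 1] acts as a "visited" marker
    static List<Integer> findDisappearedNumbers(int[] nums) {
        for (int value : nums) {
            int index = Math.abs(value) - 1;  // abs(): the slot holding 'value' may already be flipped
            if (nums[index] > 0) {
                nums[index] = -nums[index];   // mark (index + 1) as seen
            }
        }
        List<Integer> missing = new ArrayList<>();
        for (int i = 0; i < nums.length; i++) {
            if (nums[i] > 0) {                // never flipped -> the number i + 1 was absent
                missing.add(i + 1);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        System.out.println(findDisappearedNumbers(new int[]{4, 3, 2, 7, 8, 2, 3, 1})); // [5, 6]
    }
}
```

One trade-off worth stating in an interview: the O(1)-space claim holds because the technique mutates the input array; restore the signs afterward if the caller still needs the original values.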
Tail-call optimization is one of those programming features that, once you learn it, you can't believe isn't in every language (it sadly still isn't in Java).

If you haven't heard of it, here's the idea. Many algorithms are easily understood in terms of recursion, but everyone knows what the problem with that is. Each recursive call consumes a new stack frame, so if you recurse too deep, you overflow the stack and run out of memory.

Tail-call optimization cleverly fixes this. It's a compile-time language feature that transforms recursive functions so that they reuse the current stack frame instead of adding new ones. That means you can recurse to unlimited depths without blowing the stack. Notably, this only works if the recursive call is the last operation in the function. Otherwise it's impossible to reuse the current frame. But that usually isn't a problem.

Elixir and OCaml (I've heard) do this for you automatically. Scala and Clojure do it with hints (tailrec and recur, respectively). I'd love it if it came to Java, since it's something I've come to rely on. But it's been debated for many years and it hasn't happened yet. Maybe someday. 🤷
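Since Java lacks the optimization, the transformation has to be done by hand. This sketch (my own illustration, not from the post) shows a tail-recursive sum and the loop a TCO-capable compiler would effectively turn it into:

```java
public class TailCallDemo {
    // Tail-recursive: the recursive call is the LAST operation, with the
    // running result carried in an accumulator. Without TCO, each call still
    // consumes a Java stack frame, so very large n overflows the stack.
    static long sumRec(long n, long acc) {
        if (n == 0) return acc;
        return sumRec(n - 1, acc + n);
    }

    // What TCO effectively produces: the same logic as a loop that reuses one
    // "frame" (the local variables) instead of growing the stack.
    static long sumLoop(long n) {
        long acc = 0;
        while (n != 0) {
            acc += n;
            n -= 1;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumRec(1000, 0));      // fine at shallow depth
        System.out.println(sumLoop(100_000_000)); // huge n, no stack growth
    }
}
```

The accumulator is the key move: a direct `n + sumRec(n - 1)` is not a tail call, because the addition happens after the recursive call returns, so the frame cannot be reused.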