Day 18 — #100DaysJava. Today Java connected to a database for the first time. That felt real. ☕

For 17 days I have been writing Java code that runs and stops. Data lives for one second and disappears. Today I learned JDBC — and now data can actually be saved, fetched, updated, and deleted. Permanently. This is where Java starts feeling like a real backend language.

---

What is JDBC? JDBC stands for Java Database Connectivity. It is the bridge between your Java code and a database like MySQL or PostgreSQL. Without JDBC, Java cannot talk to a database. With JDBC, Java can run SQL queries directly from your code.

---

How it works — 5 simple steps every Java developer should know:

Step 1 — Load the driver. Tell Java which database you are connecting to (optional since JDBC 4.0, because modern drivers register themselves automatically).
Class.forName("com.mysql.cj.jdbc.Driver");

Step 2 — Create a connection. Give the database URL, username, and password.
Connection con = DriverManager.getConnection(url, user, password);

Step 3 — Create a statement. Prepare the SQL query you want to run.
Statement st = con.createStatement();

Step 4 — Execute the query. Run SELECT, INSERT, UPDATE, or DELETE.
ResultSet rs = st.executeQuery("SELECT * FROM users");

Step 5 — Close the connection. Always close after use. Never leave it open.
con.close();

---

The thing that surprised me — PreparedStatement vs Statement. Statement is simple but dangerous: if you put user input directly into a SQL query, an attacker can inject malicious SQL and destroy your database. This is called SQL Injection. PreparedStatement is safe: you use placeholders (?) and Java binds the input safely. Every real application uses PreparedStatement. Never Statement with user input.

PreparedStatement ps = con.prepareStatement("SELECT * FROM users WHERE id = ?");
ps.setInt(1, userId);

---

Also learned today — CRUD operations:
CREATE → INSERT INTO
READ → SELECT
UPDATE → UPDATE ... SET
DELETE → DELETE FROM

These four operations are the foundation of every backend application ever built.
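The five steps above can be combined into one self-contained sketch. The URL, user, and password (jdbc:mysql://localhost:3306/demo, root, secret) are illustrative placeholders, not real credentials; without a MySQL server and driver on the classpath, the program simply returns the connection error, and try-with-resources still guarantees step 5 (closing) in every path.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSteps {
    // Attempts steps 2-4; returns the first user name, or the error text
    // when no database is reachable (the normal case for this sketch).
    static String run(String url, String user, String password) {
        // try-with-resources closes rs, st, and con automatically (step 5),
        // even if a query throws.
        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT name FROM users")) {
            return rs.next() ? rs.getString("name") : "no rows";
        } catch (SQLException e) {
            return "Could not connect: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Assumption: a local demo database; otherwise this prints the error.
        System.out.println(run("jdbc:mysql://localhost:3306/demo", "root", "secret"));
    }
}
```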
---

What clicked today — every app I have ever used stores data somewhere. Instagram saves your photos. Zomato saves your orders. Swiggy saves your address. JDBC is the layer that makes that possible in Java.

17 days in. The journey is getting more real every single day. 💪

Day 1 ................................................... Day 18

To any backend developer reading this — what was your first database connection moment like? Did it feel as satisfying as it did for me today? 🙏

#Java #JDBC #Database #MySQL #BackendDevelopment #100DaysOfJava #JavaDeveloper #LearningInPublic #100DaysOfCode #SQL #WebDevelopment #Programming
Connecting Java to a Database with JDBC
More Relevant Posts
Day 19 — #100DaysJava. Today I built my first real Java project. Not a tutorial. Not a copy-paste. A working backend system I built myself. ☕

It is a Login and Registration System using JDBC, the DAO pattern, and MySQL. Users can register. Users can log in. Data is stored in a real database. That is a real backend application.

---

Here is everything I used to build it — and what each piece does:

JDBC — the bridge between Java and MySQL. Without this, Java cannot talk to a database at all.

PreparedStatement — the safe way to run SQL queries. Prevents SQL Injection attacks. Every real company uses this. Never use plain Statement with user input.

DAO Pattern — stands for Data Access Object. This separates your database logic from your business logic. Your main code does not need to know HOW data is saved — just that it is saved. Clean, organized, professional.

Transactions — if two database operations need to happen together, transactions make sure either BOTH succeed or NEITHER happens. This is how bank transfers work: either money leaves account A AND enters account B, or nothing happens at all.

Batch Processing — instead of running 100 INSERT queries one by one, batch them and run all 100 in one go. Much faster. This matters in production systems handling real traffic.

Connection Pooling — instead of creating a new database connection for every request, reuse existing connections. HikariCP is the industry standard for this. Every Spring Boot application uses it under the hood.

---

Project structure I followed:
model — User.java (the data object)
dao — UserDAO interface + UserDAOImpl (database logic)
util — DBConnection (reusable connection)
Main — runs the program

This is the same structure used in real enterprise Java projects.

---

What I learned beyond the code:

Storing plain-text passwords is dangerous. Never do it. BCrypt hashing is the industry standard — that is my next step.

Always close your database connection. Use try-with-resources so it closes automatically even if something crashes.

100% coverage does not mean bug-free. Testing edge cases — null email, wrong password, duplicate registration — is what separates good developers from average ones.

---

19 days ago I did not know what a variable was in Java. Today I built a backend system with a real database, real security concepts, and real architecture patterns. The only thing that got me here — showing up every single day.

Day 1 .......................... Day 19

To any developer reading this — what was the first real project you built? Drop it below. I would love to know. 🙏

#Java #JDBC #MySQL #DAOPattern #BackendDevelopment #100DaysOfJava #JavaDeveloper #LearningInPublic #100DaysOfCode #Database #CleanCode #SoftwareEngineering #ProjectBuilding
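A minimal sketch of the DAO idea from the post, with an in-memory implementation standing in for the JDBC one. All class and method names here are illustrative, not taken from the actual project; the point is that callers code against the interface and never see how the data is stored.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// model
record User(String email, String passwordHash) {}

// dao: the contract the rest of the application codes against
interface UserDAO {
    boolean register(User user);            // false if the email is already taken
    Optional<User> findByEmail(String email);
}

// One implementation stores users in a HashMap; a JDBC implementation would
// run INSERT/SELECT instead. Callers cannot tell the difference -- that
// separation of database logic from business logic is the point of the pattern.
class InMemoryUserDAO implements UserDAO {
    private final Map<String, User> byEmail = new HashMap<>();

    public boolean register(User user) {
        return byEmail.putIfAbsent(user.email(), user) == null;
    }

    public Optional<User> findByEmail(String email) {
        return Optional.ofNullable(byEmail.get(email));
    }
}

public class DaoDemo {
    public static void main(String[] args) {
        UserDAO dao = new InMemoryUserDAO();
        System.out.println(dao.register(new User("a@b.com", "hash1"))); // true
        System.out.println(dao.register(new User("a@b.com", "hash2"))); // false: duplicate email
    }
}
```

Swapping InMemoryUserDAO for a JDBC-backed UserDAOImpl changes nothing in the calling code — which is also why the in-memory version makes edge-case testing (duplicate registration, missing user) so cheap.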
---
🛑 #Stop blindly using new ArrayList<T>() and understand why ConcurrentModificationException is your friend. As Java developers, we use the Collection Framework daily. But we rarely stop to consider how it actually works under the hood — and that affects performance. Choosing the right structure — like ArrayList versus LinkedList — impacts your application’s speed and memory usage. This diagram visualizes how Java manages that data internally. Let’s break it down using real code:

1. ArrayList and the Cost of Dynamic Resizing

ArrayList is excellent for random access, but it has to manage an underlying array. When it reaches capacity, Java must create a new, larger array and copy all the data over — an O(n) operation.

The diagram shows: ArrayList -> Check Capacity -> Dynamic Resize -> MEMORY (Heap)

How it looks in Java:

import java.util.ArrayList;
import java.lang.reflect.Field;

public class ArrayListResizingDemo {
    public static void main(String[] args) throws Exception {
        // We initialize with a specific capacity.
        ArrayList<String> list = new ArrayList<>(5);
        System.out.println("1. New ArrayList created with capacity 5.");
        checkInternalCapacity(list);

        // Fill it up. The internal array size (5) matches the element count (5).
        System.out.println("\n2. Filling up capacity...");
        for (int i = 0; i < 5; i++) {
            list.add("Element " + (i + 1));
        }
        checkInternalCapacity(list);

        // The next addition triggers "Dynamic Resize."
        System.out.println("\n3. Adding the 6th element (triggers dynamic resize)...");
        list.add("Element 6");

        // The underlying array has now grown (~50%).
        checkInternalCapacity(list);
    }

    /**
     * Helper function (uses Reflection, not for production!). On Java 9+ you
     * need --add-opens java.base/java.util=ALL-UNNAMED to access this field.
     */
    private static void checkInternalCapacity(ArrayList<?> list) throws Exception {
        Field dataField = ArrayList.class.getDeclaredField("elementData");
        dataField.setAccessible(true);
        Object[] internalArray = (Object[]) dataField.get(list);
        System.out.println(" --> Current internal array size: " + internalArray.length);
        System.out.println(" --> Number of actual elements stored: " + list.size());
    }
}

#java #springboot
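Since the hook mentions ConcurrentModificationException, here is a small self-contained demo (list contents are arbitrary) of when the fail-fast check fires during a for-each loop and of the iterator-based fix:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class CmeDemo {
    // Calling list.remove() inside a for-each loop invalidates the hidden
    // iterator; its next next() call fails fast. Returns true if CME was
    // thrown. (Known quirk: removing the second-to-last element can end the
    // loop silently instead, because hasNext() returns false first.)
    static boolean removeInForEach(List<String> list) {
        try {
            for (String s : list) {
                if (s.startsWith("a")) {
                    list.remove(s); // structural modification mid-iteration
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // the fail-fast check caught the bug for us
        }
    }

    // The safe way: remove through the iterator itself.
    static void removeViaIterator(List<String> list) {
        for (Iterator<String> it = list.iterator(); it.hasNext(); ) {
            if (it.next().startsWith("a")) {
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        List<String> fruits = new ArrayList<>(List.of("apple", "banana", "cherry"));
        System.out.println(removeInForEach(new ArrayList<>(fruits))); // true
        removeViaIterator(fruits);
        System.out.println(fruits); // [banana, cherry]
    }
}
```

That exception is "your friend" in exactly this sense: it turns a silent data corruption bug into a loud, immediate failure.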
---
Today’s session was a deep dive into the Java Collections Framework, with a strong focus on the evolution from traditional arrays to the more flexible and powerful ArrayList. Below is a structured summary of the key concepts explored:

🔹 Limitations of Arrays:
1) Fixed Size — arrays have a predefined capacity, making them unsuitable for scenarios involving dynamic or growing datasets.
2) Homogeneous Data Storage — arrays store elements of a single data type, limiting flexibility when managing diverse data.
3) Contiguous Memory Requirement — arrays require a continuous block of memory. For large datasets (e.g., 1 crore elements), this can lead to memory allocation issues or system performance degradation.
4) Performance Bottlenecks — operations like duplicate detection using nested loops result in O(n²) time complexity, which does not scale well for large inputs.

🔹 Java Collections Framework Overview:
- Introduction: Introduced with JDK 1.2 (1998) to provide efficient, reusable data structures.
- Architects: Designed primarily by Joshua Bloch, with contributions from Neal Gafter.
- Purpose: Offers a standardized set of interfaces and classes to store, manipulate, and process data without reinventing core logic.
- Evolution: Oracle acquired Sun Microsystems in 2010; JDK 7 was the first release under Oracle, which now maintains the platform.

🔹 ArrayList: Internal Working & Behavior:
- Underlying Structure: A dynamically resizable array.
- Default Initial Capacity: 10 elements.
- Resizing Formula: roughly New Capacity = Current Capacity × 1.5 in modern JDKs (older versions used (Current Capacity × 3 / 2) + 1).
- Resizing Cost: A costly operation involving memory reallocation and copying elements to a new array.

Key Characteristics:
- Heterogeneous Storage: A raw ArrayList can store different types of objects (with generics, the element type is fixed at compile time).
- Insertion Order Preserved
- Allows Duplicates and Null Values
- Object-Only Storage: Primitive types are automatically converted to wrapper objects via autoboxing.

🔹 Technical Hierarchy & Usage:

Class Hierarchy: ArrayList → AbstractList → List → SequencedCollection (Java 21+) → Collection → Iterable

Element Access:
- Use size() instead of .length
- Use get(index) instead of []

Traversal Techniques:
- Traditional for loop: ideal for index-based access (e.g., reverse iteration)
- Enhanced for-each loop: clean and efficient for sequential traversal. Example:

ArrayList<Integer> numbers = new ArrayList<>();
numbers.add(10);
numbers.add(20);
numbers.add(30);
for (Integer num : numbers) {
    System.out.println(num);
}

- Iterator: cursor-based traversal inherited from Iterable.

Mastering these fundamentals is a crucial step toward building high-performance Java applications and excelling in technical interviews 🚀💻.

#JAVA #PROGRAMMING #TapAcademy #HarshithT
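A short sketch of the index-based traversal point above — reverse iteration with size() and get() in place of the array's .length and [], plus autoboxing on add (the values are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;

public class TraversalDemo {
    // Reverse traversal needs index-based access: size() and get(i)
    // replace the array's .length and [] syntax.
    static List<Integer> reversed(List<Integer> numbers) {
        List<Integer> out = new ArrayList<>();
        for (int i = numbers.size() - 1; i >= 0; i--) {
            out.add(numbers.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        ArrayList<Integer> numbers = new ArrayList<>(List.of(10, 20, 30));
        numbers.add(40); // the int literal is autoboxed to Integer
        System.out.println(reversed(numbers)); // [40, 30, 20, 10]
    }
}
```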
---
If you are preparing for a Java backend interview, there is one question you can almost guarantee will come up: "Can you explain ACID properties?" It sounds like a database-only topic, but as Java developers, we manage these every day through Spring, JPA, and Hibernate.

ACID refers to a set of four fundamental principles (Atomicity, Consistency, Isolation, and Durability) that ensure database transactions are processed reliably and maintain data integrity. While these are primarily database concepts, they are managed in Java through technologies like JDBC, JPA/Hibernate, and the Spring Framework.

The 4 ACID Properties

A - Atomicity ("All or Nothing"): Ensures that a transaction is treated as a single, indivisible unit. All operations within it must succeed for the transaction to be committed; if any part fails, the entire transaction is rolled back, leaving the database unchanged.
E.g. imagine a bank transfer. You debit Account A, but the system crashes before crediting Account B. Atomicity ensures that if one part fails, the whole thing rolls back.
Java Management: Handled via Connection.commit() and Connection.rollback() in JDBC, or by using the @Transactional annotation in Spring.

C - Consistency: Guarantees that a transaction moves the database from one valid state to another, following all predefined rules and constraints (like foreign keys or unique values).
Java Management: Maintained through proper application logic, validation rules, and schema constraints enforced by ORM frameworks like Hibernate.

I - Isolation: Ensures that concurrently executing transactions do not interfere with each other. Intermediate changes made by one transaction are invisible to others until it is fully committed.
E.g. when 1,000 users hit your app at once, Isolation ensures their transactions don't "leak" into each other. One user shouldn't see another's half-finished data.
Java Management: Managed by setting isolation levels (e.g., READ_COMMITTED, SERIALIZABLE) in JDBC or Spring's @Transactional(isolation = ...).

D - Durability: Guarantees that once a transaction is committed, its changes are permanent and will survive system failures or power outages.
E.g. once a transaction is committed, it’s permanent. Even if the server loses power one second later, the data is safe.
Java Management: Primarily handled by the database engine (e.g., PostgreSQL, MySQL) using transaction logs and journaling, but Java confirms this by checking for a successful commit().

#JavaLearning #KnowledgeTransfer #InterviewPreparation
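To make Atomicity concrete without a database, here is an in-memory sketch of the all-or-nothing bank transfer. The accounts map, names, and amounts are invented for illustration; in real code the same shape is Connection.setAutoCommit(false) with commit()/rollback(), or Spring's @Transactional, with the database doing the undo work.

```java
import java.util.HashMap;
import java.util.Map;

public class AtomicTransferDemo {
    // In-memory stand-in for two account rows. A real version would be two
    // UPDATE statements inside one JDBC transaction.
    static boolean transfer(Map<String, Integer> accounts, String from, String to, int amount) {
        Integer before = accounts.get(from);
        try {
            accounts.put(from, before - amount);       // debit (first write)
            if (!accounts.containsKey(to)) {           // credit fails mid-"transaction"
                throw new IllegalStateException("unknown account: " + to);
            }
            accounts.merge(to, amount, Integer::sum);  // credit (second write)
            return true;                               // "commit"
        } catch (RuntimeException e) {
            accounts.put(from, before);                // "rollback": undo the debit
            return false;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> accounts = new HashMap<>(Map.of("A", 100, "B", 50));
        System.out.println(transfer(accounts, "A", "missing", 30)); // false
        System.out.println(accounts.get("A"));                      // 100 -- debit rolled back
    }
}
```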
---
🚀 Day 3 of My Advanced Java Journey – Mastering CRUD Operations in JDBC

Today, I implemented one of the most important concepts in backend development — CRUD operations using JDBC.

🔹 What is CRUD? CRUD stands for:
Create → Insert data
Read → Fetch data
Update → Modify existing data
Delete → Remove data

🔹 1. Create (INSERT) — used to add records into the database.
✔️ Key concept: Using PreparedStatement for inserting values safely.

String sql = "INSERT INTO employees(name, designation, salary) VALUES (?, ?, ?)";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setString(1, "Vamsi");
ps.setString(2, "Software Engineer");
ps.setDouble(3, 60000);
ps.executeUpdate();

🔹 2. Read (SELECT) — used to retrieve and display data.
✔️ Key concept: Using ResultSet to iterate through records.

Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM employees");
while (rs.next()) {
    int id = rs.getInt("id");
    String name = rs.getString("name");
    String designation = rs.getString("designation");
    double salary = rs.getDouble("salary");
}

🔹 3. Update (UPDATE) — used to modify existing records.

String sql = "UPDATE employees SET salary = ? WHERE id = ?";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setDouble(1, 65000);
ps.setInt(2, 1);
ps.executeUpdate();

🔹 4. Delete (DELETE) — used to remove records from the database.

String sql = "DELETE FROM employees WHERE id = ?";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setInt(1, 1);
ps.executeUpdate();

🔍 What I explored beyond the session:
- Why PreparedStatement is preferred over Statement (prevents SQL Injection 🔐)
- The difference between executeQuery() and executeUpdate()
- The importance of handling SQLException
- Closing resources (Connection, Statement, ResultSet) to avoid resource leaks

💡 CRUD operations form the core of any real-world application, from simple apps to enterprise systems.
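On the last point, closing resources: Connection, Statement, and ResultSet all implement AutoCloseable, so try-with-resources closes them automatically, in reverse order of opening, even when an exception is thrown mid-query. A sketch with a stand-in resource (the Resource record is invented purely to make the close order visible without needing a database):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    // Stand-in for Connection / Statement / ResultSet: anything implementing
    // AutoCloseable can go in a try-with-resources header.
    record Resource(String name, List<String> log) implements AutoCloseable {
        public void close() {
            log.add("closed " + name);
        }
    }

    static List<String> run() {
        List<String> log = new ArrayList<>();
        try (Resource con = new Resource("Connection", log);
             Resource st  = new Resource("Statement", log);
             Resource rs  = new Resource("ResultSet", log)) {
            log.add("query executed");
        } // all three close here, in reverse order of opening
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run());
        // [query executed, closed ResultSet, closed Statement, closed Connection]
    }
}
```

The same header shape works with the real JDBC types, which removes the manual close() calls and the leak risk the post mentions.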
🙌 Special thanks to the amazing trainers at TAP Academy: kshitij kenganavar Sharath R MD SADIQUE Bibek Singh Hemanth Reddy Vamsi yadav Harshit T Ravi Magadum Somanna M G Rohit Ravinder TAP Academy 📌 Learning in public. Building consistency every day. #Java #AdvancedJava #JDBC #BackendDevelopment #LearningInPublic #VamsiLearns
---
Really interesting perspective on the trade-offs between Spring Data JPA and JDBC. In my experience working with backend systems, this balance between abstraction and control is always a challenge. While ORMs like JPA make development faster, they can sometimes hide what’s really happening at the SQL level, especially when dealing with performance issues. Tools like jOOQ are interesting because they give you more control over queries while still keeping the benefits of type safety and integration with Java. I think the key is choosing the right tool based on the problem—especially when performance and complex queries are involved. Curious to hear—have you used jOOQ in production, and how was your experience?
💡 Java Object-Oriented Querying.

👉 In terms of database access, the Java community is clearly divided into two camps: some like Spring Data JPA for its simplicity and low entry threshold, while others prefer Spring JDBC for its precision and query-tuning capabilities.

👉 Both Spring Data JPA and Spring Data JDBC, for all their obvious advantages, have drawbacks that can make them a poor fit for production. These solutions are two extremes, and we need a golden mean.

🔥 You may ask: what are the alternatives? And I will answer: Java Object-Oriented Querying.

⚠️ jOOQ (Java Object-Oriented Querying) is a Java library that allows you to write SQL queries directly in your code using a typed DSL (Domain-Specific Language) generated from the database schema. When used with Spring Boot, jOOQ provides strong typing, query safety, and convenient database operations, making it an excellent alternative to ORMs like Hibernate for complex queries.

🔥 Unlike Spring Data JPA, jOOQ has no side effects. There is no N+1 problem (unless, of course, you create one yourself with suboptimal queries). All queries will be executed exactly as you define them and in exactly the number you specify.

➕ Benefits:
▪️ Code generation: jOOQ scans your database and generates Java classes corresponding to the tables and views.
▪️ Type safety: If you change a column name in the database, the project will not compile until you update the queries, which prevents runtime errors.
▪️ SQL-oriented: Unlike Hibernate (which hides SQL), jOOQ allows you to write full-fledged, complex SQL queries (JOINs, subqueries, window functions) in Java while retaining control over what happens.
▪️ Integration with Spring: Spring Boot automatically configures jOOQ components, supporting transactions and mapping results to POJOs (Plain Old Java Objects).
🔥 DSL frameworks solve the problem of “translation” between Java and SQL, allowing you to write a database query in a Java-based architecture in such a way that it exactly matches the expected SQL query.

‼️ Example:

Result<Record> result = create.select()
    .from(AUTHOR)
    .where(AUTHOR.ID.gt(5))
    .orderBy(AUTHOR.FIRST_NAME.asc())
    .fetch();

for (Record r : result) {
    Integer id = r.getValue(AUTHOR.ID);
    String firstName = r.getValue(AUTHOR.FIRST_NAME);
    logger.log(Level.INFO, "ID: {0}, Name: {1}", new Object[] { id, firstName });
}

📌 jOOQ is the perfect choice if you need full control over SQL but want to avoid manual JDBC data mapping and typos in SQL queries.

#programmingtips #softwaredevelopment #data #spring
---
🔹 What is Idempotency?

An operation is idempotent if calling it multiple times gives the same result as calling it once.
Example:
First request → Order created ✅
Duplicate request → No new order ❌ (returns the same response)

🔹 Common Approach in Spring Boot

✅ Use an Idempotency Key (best practice). The client sends a unique key in a header:
Idempotency-Key: abc123
The server stores and checks it.

🔹 Example Implementation (Spring Boot)

1. Entity to store the request

@Entity
public class IdempotencyRecord {
    @Id
    private String idempotencyKey;
    private String response;
    private int statusCode;
    // getters & setters
}

2. Repository

public interface IdempotencyRepository extends JpaRepository<IdempotencyRecord, String> {
}

3. Service logic

@Service
public class PaymentService {

    @Autowired
    private IdempotencyRepository repository;

    public ResponseEntity<String> processPayment(String key, String request) {
        Optional<IdempotencyRecord> existing = repository.findById(key);

        // ✅ If already processed, return the stored response
        if (existing.isPresent()) {
            IdempotencyRecord record = existing.get();
            return ResponseEntity.status(record.getStatusCode())
                    .body(record.getResponse());
        }

        // ✅ Process the actual logic
        String response = "Payment Successful for request: " + request;

        // Save the result
        IdempotencyRecord record = new IdempotencyRecord();
        record.setIdempotencyKey(key);
        record.setResponse(response);
        record.setStatusCode(200);
        repository.save(record);

        return ResponseEntity.ok(response);
    }
}

4. Controller

@RestController
@RequestMapping("/payments")
public class PaymentController {

    @Autowired
    private PaymentService service;

    @PostMapping
    public ResponseEntity<String> makePayment(
            @RequestHeader("Idempotency-Key") String key,
            @RequestBody String request) {
        return service.processPayment(key, request);
    }
}

🔹 How It Prevents Duplicates

First request with key abc123 → process & save
Second request with the same key → return saved response (no duplicate)

🔹 Important Points
✔ Use a unique key per request (UUID recommended)
✔ Store the response + status
✔ Add an expiry (TTL) for old keys
✔ Use a database unique constraint to avoid race conditions
✔ Combine with @Transactional for safety

🔹 Advanced (Production Ready)
- Use Redis for faster lookups
- Add locking to avoid parallel duplicate execution
- Store a hash of the full request for validation
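The core dedup idea can be shown without Spring or a database. In this sketch (names illustrative) a ConcurrentHashMap plays the role the post assigns to the JPA store: computeIfAbsent guarantees the payment logic runs at most once per key, and every replay gets the stored response. In production, the database unique constraint or Redis provides the same at-most-once guarantee across processes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotencyDemo {
    private final Map<String, String> processed = new ConcurrentHashMap<>();
    private int paymentsExecuted = 0;

    // computeIfAbsent runs the mapping function only the first time a key is
    // seen; for a known key the stored response is returned untouched.
    String processPayment(String idempotencyKey, String request) {
        return processed.computeIfAbsent(idempotencyKey, k -> {
            paymentsExecuted++; // the side effect happens exactly once per key
            return "Payment Successful for request: " + request;
        });
    }

    int paymentsExecuted() {
        return paymentsExecuted;
    }

    public static void main(String[] args) {
        IdempotencyDemo service = new IdempotencyDemo();
        System.out.println(service.processPayment("abc123", "order-1"));
        System.out.println(service.processPayment("abc123", "order-1")); // replay: same response
        System.out.println(service.paymentsExecuted()); // 1
    }
}
```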
---
Wanted to share a few learnings from a recent backend upgrade we did. ⚙️

We moved one of our services to a newer Java, Spring, and Hibernate stack, and one thing became very clear: big upgrades usually don’t fail at the compiler boundary. They fail at the assumption boundary.

For us, the real work was not just the JDK upgrade. It was aligning everything around it:
- persistence mappings
- date/time behavior
- JSON serialization

One of the earliest signals came from date/time handling. Before moving to java.time, we saw that some flows still using legacy java.sql.* types were not behaving correctly after the upgrade. In multiple cases, timestamp values were ending up in the database as 00:00:00.000 instead of carrying the expected time component. That created major issues across the dev env because downstream flows were now reading and acting on incomplete or incorrect timestamps.

That was the point where it became clear: this was not just a framework upgrade problem. It was a type-system and consistency problem.

So we moved legacy JDBC-era date/time handling toward java.time, especially LocalDateTime. Java 21 does not suddenly reject java.sql.Date or java.sql.Timestamp, but modern frameworks definitely favor java.time. And with Hibernate 6.6.x, that path is much cleaner and more predictable.

Why that mattered: older java.sql.* types were still “working” in parts of the system, but they became a poor fit across the full stack, especially once we looked at:
- ORM mappings
- Redis serialization
- integration flows
- consistency of behavior across layers

The second issue was quieter, but just as important. 🧩 Once LocalDateTime entered one of our Redis-backed JSON flows, serialization started failing with:

InvalidDefinitionException: Java 8 date/time type `java.time.LocalDateTime` not supported by default

The root cause was a custom ObjectMapper. The fix:

OBJECT_MAPPER.registerModule(new JavaTimeModule());
OBJECT_MAPPER.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

Why this mattered:
- JavaTimeModule teaches Jackson how to handle java.time types
- disabling WRITE_DATES_AS_TIMESTAMPS makes unannotated date/time fields serialize as readable strings by default instead of timestamp/array-style output

In short: the upgrade exposed hidden consistency gaps. The API layer may be fine, but caches, async flows, and internal serializers can still be operating under very different assumptions.

A few takeaways from this upgrade 👇
- JDK upgrades are often the easy part
- date/time migration deserves first-class attention
- Hibernate 6.6.x is far happier when your model uses java.time
- custom ObjectMappers are hidden compatibility surfaces
- consistency across DB, API, cache, and integrations matters more than “it compiles”

#Java21 #SpringBoot #Hibernate #BackendEngineering #SoftwareArchitecture #TechLearning
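A small stdlib-only sketch of the 00:00:00 symptom described above (the sample timestamp is invented): java.sql.Timestamp round-trips the time component faithfully, while any flow that funnels a value through the date-only java.sql.Date silently drops it — one concrete way midnight timestamps end up in a database.

```java
import java.sql.Date;
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class DateTimeMigrationDemo {
    // Legacy bridge that keeps the time component.
    static LocalDateTime viaTimestamp(LocalDateTime t) {
        return Timestamp.valueOf(t).toLocalDateTime();
    }

    // java.sql.Date is date-only: the time-of-day cannot survive this path,
    // so reconstructing a LocalDateTime from it yields midnight.
    static LocalDateTime viaSqlDate(LocalDateTime t) {
        return Date.valueOf(t.toLocalDate()).toLocalDate().atStartOfDay();
    }

    public static void main(String[] args) {
        LocalDateTime wanted = LocalDateTime.of(2024, 5, 1, 14, 30, 15);
        System.out.println(viaTimestamp(wanted)); // 2024-05-01T14:30:15
        System.out.println(viaSqlDate(wanted));   // 2024-05-01T00:00 -- time lost
    }
}
```

Staying in java.time end to end (entity fields, mappers, serializers) removes this whole class of silent truncation.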
---
#Day73 🚀 || AI Powered Java Full Stack Course 💻 Advanced Java | 🗄️ Hibernate Mapping, Object States & Dirty Checking (Day 3 of Hibernate) with Frontlines EduTech (FLM)

Hello connections 👋😊 As part of my AI Powered Java Full Stack Course, today marks Day 3 of my Hibernate learning journey, where I explored advanced concepts like mapping annotations, primary key strategies, object lifecycle states, and dirty checking 💡🚀

🔹 🧩 Hibernate Mapping Annotations
Today, I explored how Hibernate maps Java classes and variables to database tables using annotations 🔗
✨ Key annotations:
• @Table ➝ Maps a Java class to a specific database table 🗄️
• @Column ➝ Maps Java variables to table columns
• @GeneratedValue ➝ Automatically generates primary key values and avoids issues like null/zero IDs during fetching

🔹 🔑 Primary Key Generation Strategies
Hibernate provides multiple strategies to generate primary keys efficiently ⚙️
✨ Strategies learned:
• IDENTITY ➝ Uses the database auto-increment feature
• SEQUENCE ➝ Uses a database sequence object
• UUID hex generator ➝ Generates unique 32-character hexadecimal IDs 🔐
• AUTO ➝ Lets Hibernate decide the best strategy based on the database

🔹 ⚙️ Hibernate Configuration (hbm2ddl.auto)
This property controls how Hibernate manages the database schema 📊
✨ Options:
• create ➝ Drops existing tables and recreates them each time
• update ➝ Updates the schema without deleting existing data
• none ➝ No schema changes (default behavior)

🔹 🔄 Hibernate Object States
Understanding the object lifecycle is very important in Hibernate 🔍
✨ States:
• Transient ➝ Object created but not yet linked to the database
• Persistent ➝ Object associated with a session and tracked by Hibernate
• Detached ➝ Object exists but its session is closed

🔹 🔍 Dirty Checking & Updates
One of the most powerful features I learned today 🚀
✨
• Hibernate automatically detects changes in persistent objects
• No need to write explicit update queries
• Updating values using setter methods is reflected in the database automatically
• For detached objects, session.merge() is the standard approach

💡 Today’s Takeaway: Hibernate simplifies not only database interaction but also manages object states and updates efficiently using features like dirty checking — making development faster, cleaner, and more maintainable 💯🚀

🙏 Special thanks to Krishna Mantravadi, Upendra Gulipilli, and my trainer Fayaz S for their continuous guidance in my Hibernate learning journey 💡

#Hibernate #Java #AdvancedJava #ORM #JavaFullStack #BackendDevelopment #LearningJourney #AIPoweredJavaFullStack #FrontlinesEdutech #Frontlinesmedia #FLM 🚀
---
🚀 Java 8 Map Methods You Must Know (With Examples)

As Java developers, we often use Map, but Java 8 introduced some powerful methods that simplify complex logic. Let’s break down the most important ones 👇

import java.util.HashMap;
import java.util.Map;

public class MapMethodsDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        // Initial Data
        map.put("apple", 2);
        map.put("orange", 3);
        System.out.println("Initial Map: " + map);

        // 1. compute()
        map.compute("apple", (k, v) -> v == null ? 1 : v + 1);  // key exists
        map.compute("banana", (k, v) -> v == null ? 1 : v + 1); // key does not exist
        System.out.println("After compute(): " + map);

        // 2. computeIfAbsent()
        map.computeIfAbsent("grapes", k -> 10); // added
        map.computeIfAbsent("apple", k -> 100); // ignored (already exists)
        System.out.println("After computeIfAbsent(): " + map);

        // 3. computeIfPresent()
        map.computeIfPresent("orange", (k, v) -> v * 2); // updated
        map.computeIfPresent("mango", (k, v) -> v * 2);  // ignored (not present)
        System.out.println("After computeIfPresent(): " + map);

        // 4. merge()
        map.merge("apple", 5, (oldVal, newVal) -> oldVal + newVal); // adds
        map.merge("pineapple", 7, Integer::sum);                    // inserts (not present)
        System.out.println("After merge(): " + map);
    }
}

Output:
Initial Map: {orange=3, apple=2}
After compute(): {banana=1, orange=3, apple=3}
After computeIfAbsent(): {banana=1, orange=3, apple=3, grapes=10}
After computeIfPresent(): {banana=1, orange=6, apple=3, grapes=10}
After merge(): {banana=1, orange=6, apple=8, pineapple=7, grapes=10}

🧠 Explanation

🔹 1. compute() 👉 Works for both existing and non-existing keys
map.compute("apple", (k, v) -> v == null ? 1 : v + 1);
If the key exists → updates the value
If the key doesn’t exist → creates a new value
v == null → means the key is not present
💡 Use when you want full control over the logic

🔹 2. computeIfAbsent() 👉 Runs only if the key is NOT present
map.computeIfAbsent("grapes", k -> 10);
If the key exists → does nothing
If the key is missing → inserts the value
💡 Best for: initializing values, avoiding overwriting existing data

🔹 3. computeIfPresent() 👉 Runs only if the key is present
map.computeIfPresent("orange", (k, v) -> v * 2);
If the key exists → updates
If not → ignored
💡 Use when you only want to modify existing data

🔹 4. merge() 👉 Combines the existing value + a new value
map.merge("apple", 5, (oldVal, newVal) -> oldVal + newVal);
If the key exists → applies the merge logic
If not → inserts directly
💡 Best for: counting, summation, aggregation

#Java #Java8 #Coding #Developers #Programming
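As a follow-up on "Best for: counting" — merge() turns word counting into one line per element: insert 1 for a new key, otherwise add 1 to the existing count (sample data invented):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCountDemo {
    // merge() is the idiomatic counting one-liner: the value 1 is inserted
    // when the key is absent, and Integer::sum combines it otherwise.
    static Map<String, Integer> count(List<String> words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = count(List.of("apple", "orange", "apple"));
        System.out.println(counts.get("apple")); // 2
    }
}
```

Before Java 8 the same loop needed an explicit containsKey/get/put dance; merge() collapses it without losing the null-handling edge cases.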