Day 19 — #100DaysJava

Today I built my first real Java project. Not a tutorial. Not a copy-paste. A working backend system I built myself. ☕

It is a Login and Registration System using JDBC, the DAO pattern, and MySQL. Users can register. Users can log in. Data is stored in a real database. That is a real backend application.

---

Here is everything I used to build it — and what each piece does:

JDBC — the bridge between Java and MySQL. Without it, Java cannot talk to a database at all.

PreparedStatement — the safe way to run SQL queries. Prevents SQL injection attacks. Every real company uses this. Never use a plain Statement with user input.

DAO pattern — stands for Data Access Object. It separates your database logic from your business logic. Your main code does not need to know HOW data is saved — just that it is saved. Clean, organized, professional.

Transactions — if two database operations need to happen together, transactions make sure either BOTH succeed or NEITHER happens. This is how bank transfers work: either money leaves account A AND enters account B — or nothing happens at all.

Batch processing — instead of running 100 INSERT queries one by one, batch them and run all 100 in one go. Much faster. This matters in production systems handling real traffic.

Connection pooling — instead of creating a new database connection for every request, reuse existing connections. HikariCP is the industry standard for this. Every Spring Boot application uses it under the hood — it is Spring Boot's default pool.

---

Project structure I followed:

model — User.java (the data object)
dao — UserDAO interface + UserDAOImpl (database logic)
util — DBConnection (reusable connection)
Main — runs the program

This is the same structure used in real enterprise Java projects.

---

What I learned beyond the code:

Storing plain-text passwords is dangerous. Never do it. BCrypt hashing is the industry standard — that is my next step.

Always close your database connections.
Use try-with-resources so they close automatically even if something crashes.

100% test coverage does not mean bug-free. Testing edge cases — null email, wrong password, duplicate registration — is what separates good developers from average ones.

---

19 days ago I did not know what a variable was in Java. Today I built a backend system with a real database, real security concepts, and real architecture patterns. The only thing that got me here — showing up every single day.

Day 1 .......................... Day 19

To any developer reading this — what was the first real project you built? Drop it below. I would love to know. 🙏

#Java #JDBC #MySQL #DAOPattern #BackendDevelopment #100DaysOfJava #JavaDeveloper #LearningInPublic #100DaysOfCode #Database #CleanCode #SoftwareEngineering #ProjectBuilding
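Since the post leans on try-with-resources, here is a minimal, database-free sketch of that point. The FakeConnection class and its messages are invented for illustration; it stands in for a JDBC Connection, which implements the same AutoCloseable interface:

```java
public class TryWithResourcesDemo {
    // A stand-in for a JDBC Connection: anything AutoCloseable works here.
    static class FakeConnection implements AutoCloseable {
        boolean closed = false;
        void query() { throw new RuntimeException("query failed"); }
        @Override public void close() {
            closed = true;
            System.out.println("connection closed");
        }
    }

    public static void main(String[] args) {
        FakeConnection leaked = null;
        try (FakeConnection con = new FakeConnection()) {
            leaked = con;
            con.query(); // throws mid-operation...
        } catch (RuntimeException e) {
            // ...yet close() already ran before we got here.
            System.out.println("caught: " + e.getMessage());
        }
        System.out.println("closed? " + leaked.closed);
    }
}
```

The same pattern applies verbatim to Connection, PreparedStatement, and ResultSet — declare them in the try header and they are closed in reverse order even when an exception escapes.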
Day 18 — #100DaysJava

Today Java connected to a database for the first time. That felt real. ☕

For 17 days I have been writing Java code that runs and stops. Data lives for one second and disappears. Today I learned JDBC — and now data can actually be saved, fetched, updated, and deleted. Permanently. This is where Java starts feeling like a real backend language.

---

What is JDBC?

JDBC stands for Java Database Connectivity. It is the bridge between your Java code and a database like MySQL or PostgreSQL. Without JDBC, Java cannot talk to a database. With JDBC, Java can run SQL queries directly from your code.

---

How it works — 5 simple steps every Java developer should know:

Step 1 — Load the driver. Tell Java which database you are connecting to. (Since JDBC 4.0 drivers on the classpath load automatically, but you will still see this line in older code.)
Class.forName("com.mysql.cj.jdbc.Driver");

Step 2 — Create a connection. Give the database URL, username, and password.
Connection con = DriverManager.getConnection(url, user, password);

Step 3 — Create a statement. Prepare the SQL query you want to run.
Statement st = con.createStatement();

Step 4 — Execute the query. Run SELECT, INSERT, UPDATE, or DELETE.
ResultSet rs = st.executeQuery("SELECT * FROM users");

Step 5 — Close the connection. Always close after use. Never leave it open.
con.close();

---

The thing that surprised me — PreparedStatement vs Statement.

Statement is simple but dangerous. If you put user input directly into a SQL query, a hacker can inject malicious SQL and destroy your database. This is called SQL injection.

PreparedStatement is safe. You use placeholders (?) and Java handles the input safely. Every real application uses PreparedStatement. Never use Statement with user input.

PreparedStatement ps = con.prepareStatement("SELECT * FROM users WHERE id = ?");
ps.setInt(1, userId);

---

Also learned today — CRUD operations:

CREATE → INSERT INTO
READ → SELECT
UPDATE → UPDATE ... SET
DELETE → DELETE FROM

These four operations are the foundation of every backend application ever built.
---

What clicked today — every app I have ever used stores data somewhere. Instagram saves your photos. Zomato saves your orders. Swiggy saves your address. JDBC is the layer that makes that possible in Java.

17 days in. The journey is getting more real every single day. 💪

Day 1 ................................................... Day 18

To any backend developer reading this — what was your first database connection moment like? Did it feel as satisfying as it did for me today? 🙏

#Java #JDBC #Database #MySQL #BackendDevelopment #100DaysOfJava #JavaDeveloper #LearningInPublic #100DaysOfCode #SQL #WebDevelopment #Programming
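To make the injection point concrete, here is a small hypothetical sketch that only builds the SQL strings (no database involved). It shows what string concatenation actually sends to the server, and why a placeholder prevents the breakout; the table and input values are made up:

```java
public class SqlInjectionDemo {
    // What a Statement built by string concatenation actually sends to the DB.
    static String naiveQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        // Honest input: the query means what you expect.
        System.out.println(naiveQuery("alice"));

        // Malicious input: the quote closes the string literal early, and the
        // injected OR clause makes the WHERE condition match every row.
        String attack = "x' OR '1'='1";
        System.out.println(naiveQuery(attack));

        // A PreparedStatement with "WHERE name = ?" would instead send the
        // whole attack string as ONE parameter value - no breakout possible.
    }
}
```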
Today’s session was a deep dive into the Java Collections Framework, with a strong focus on the evolution from traditional arrays to the more flexible and powerful ArrayList. Below is a structured summary of the key concepts explored:

🔹 Limitations of Arrays:
1) Fixed Size — arrays have a predefined capacity, making them unsuitable for scenarios involving dynamic or growing datasets.
2) Homogeneous Data Storage — arrays store elements of a single data type, limiting flexibility when managing diverse data.
3) Contiguous Memory Requirement — arrays require a continuous block of memory. For very large datasets (e.g., 1 crore / 10 million elements), this can lead to memory allocation issues or performance degradation.
4) Performance Bottlenecks — operations like duplicate detection using nested loops run in O(n²) time, which does not scale well for large inputs.

🔹 Java Collections Framework Overview:
- Introduction: Shipped in 1998 with JDK 1.2 to provide efficient, reusable data structures.
- Architects: Designed primarily by Joshua Bloch, with contributions from Neal Gafter.
- Purpose: Offers a standardized set of interfaces and classes to store, manipulate, and process data without reinventing core logic.
- Stewardship: Java moved from Sun Microsystems to Oracle after the 2010 acquisition; JDK 7 was the first major release under Oracle, which now maintains the platform.

🔹 ArrayList: Internal Working & Behavior:
- Underlying Structure: A dynamically resizable array.
- Default Initial Capacity: 10 elements.
- Resizing Formula: New Capacity ≈ Current Capacity × 1.5 (in the OpenJDK source: oldCapacity + (oldCapacity >> 1)).
- Resizing Cost: A costly operation involving memory reallocation and copying elements to a new array.

Key Characteristics:
- Heterogeneous Storage: A raw ArrayList can store different types of objects (with generics you normally fix one element type).
- Insertion Order Preserved
- Allows Duplicates and Null Values
- Object-Only Storage: Primitive types are automatically converted to wrapper objects via autoboxing.
🔹 Technical Hierarchy & Usage:

Class Hierarchy: ArrayList → AbstractList → List → SequencedCollection → Collection → Iterable (SequencedCollection appears only from Java 21 onward)

Element Access:
- Use size() instead of .length
- Use get(index) instead of []

Traversal Techniques:
- Traditional for loop: ideal for index-based access (e.g., reverse iteration)
- Enhanced for-each loop: clean and efficient for sequential traversal.

Example:

ArrayList<Integer> numbers = new ArrayList<>();
numbers.add(10);
numbers.add(20);
numbers.add(30);
for (Integer num : numbers) {
    System.out.println(num);
}

- Iterator: cursor-based traversal via iterator(), declared in Iterable.

Mastering these fundamentals is a crucial step toward building high-performance Java applications and excelling in technical interviews 🚀💻.

#JAVA #PROGRAMMING #TapAcademy #HarshithT
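A compact, runnable sketch of the characteristics listed above: size()/get() access, duplicates and nulls being allowed, and iterator-based removal (the safe way to remove while traversing). The data is made up:

```java
import java.util.ArrayList;
import java.util.Iterator;

public class ArrayListBasics {
    public static void main(String[] args) {
        ArrayList<String> names = new ArrayList<>(); // starts empty, grows on demand
        names.add("Asha");
        names.add("Asha");   // duplicates are allowed
        names.add(null);     // null is allowed too

        System.out.println(names.size());  // size(), not .length
        System.out.println(names.get(0));  // get(index), not []

        // Iterator: removing through the iterator is safe; calling
        // names.remove(...) inside a for-each would throw
        // ConcurrentModificationException.
        Iterator<String> it = names.iterator();
        while (it.hasNext()) {
            if (it.next() == null) {
                it.remove();
            }
        }
        System.out.println(names); // [Asha, Asha]
    }
}
```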
🛑 #Stop blindly using ArrayList<T>() and understand why ConcurrentModificationException is your friend.

As Java developers, we use the Collection Framework daily. But we rarely stop to consider how it actually works under the hood — and that affects performance. Choosing the right structure — like ArrayList versus LinkedList — impacts your application’s speed and memory usage. This diagram visualizes how Java manages that data internally. Let’s break it down using real code:

1. ArrayList and the Cost of Dynamic Resizing

ArrayList is excellent for random access, but it has to manage an underlying array. When it reaches capacity, Java must create a new, larger array and copy all the data over — an O(n) operation.

The diagram shows: ArrayList -> Check Capacity -> Dynamic Resize -> MEMORY (Heap)

How it looks in Java:

import java.util.ArrayList;
import java.lang.reflect.Field;

public class ArrayListResizingDemo {
    public static void main(String[] args) throws Exception {
        // We initialize with a specific capacity.
        ArrayList<String> list = new ArrayList<>(5);
        System.out.println("1. New ArrayList created with capacity 5.");
        checkInternalCapacity(list);

        // Fill it up. The internal array size (5) matches the element count (5).
        System.out.println("\n2. Filling up capacity...");
        for (int i = 0; i < 5; i++) {
            list.add("Element " + (i + 1));
        }
        checkInternalCapacity(list);

        // The next addition triggers "Dynamic Resize."
        System.out.println("\n3. Adding the 6th element (triggers dynamic resize)...");
        list.add("Element 6");

        // The underlying array has now grown (~50%).
        checkInternalCapacity(list);
    }

    /**
     * Helper function (uses reflection, not for production!).
     * Note: on JDK 16+ you may need --add-opens java.base/java.util=ALL-UNNAMED
     * for setAccessible to succeed.
     */
    private static void checkInternalCapacity(ArrayList<?> list) throws Exception {
        Field dataField = ArrayList.class.getDeclaredField("elementData");
        dataField.setAccessible(true);
        Object[] internalArray = (Object[]) dataField.get(list);
        System.out.println(" --> Current internal array size: " + internalArray.length);
        System.out.println(" --> Number of actual elements stored: " + list.size());
    }
}

#java #springboot
If you’ve ever wrestled with string-based queries or struggled to keep your database layer type-safe, jOOQ might be the missing piece in your Java stack. N47 Igor Stojanoski #jooq #sql #java 👍 😊 https://lnkd.in/dDYEck3V
If you are preparing for a Java backend interview, there is one question you can almost guarantee will come up: "Can you explain ACID properties?"

It sounds like a database-only topic, but as Java developers we manage these every day through Spring, JPA, and Hibernate.

ACID refers to four fundamental principles - Atomicity, Consistency, Isolation, and Durability - that ensure database transactions are processed reliably and maintain data integrity. While these are primarily database concepts, they are managed in Java through technologies like JDBC, JPA/Hibernate, and the Spring Framework.

The 4 ACID Properties

A - Atomicity ("All or Nothing"): ensures that a transaction is treated as a single, indivisible unit. All operations within it must succeed for the transaction to be committed; if any part fails, the entire transaction is rolled back, leaving the database unchanged.
E.g. imagine a bank transfer. You debit Account A, but the system crashes before crediting Account B. Atomicity ensures that if one part fails, the whole thing rolls back.
Java Management: handled via Connection.commit() and Connection.rollback() in JDBC, or by using the @Transactional annotation in Spring.

C - Consistency: guarantees that a transaction moves the database from one valid state to another, following all predefined rules and constraints (like foreign keys or unique values).
Java Management: maintained through proper application logic, validation rules, and schema constraints enforced by ORM frameworks like Hibernate.

I - Isolation: ensures that concurrently executing transactions do not interfere with each other. Intermediate changes made by one transaction are invisible to others until it is fully committed.
E.g. when 1,000 users hit your app at once, isolation ensures their transactions don't "leak" into each other. One user shouldn't see another's half-finished data.
Java Management: managed by setting isolation levels (e.g., READ_COMMITTED, SERIALIZABLE) in JDBC or Spring's @Transactional(isolation = ...).

D - Durability: guarantees that once a transaction is committed, its changes are permanent and will survive system failures or power outages.
E.g. once a transaction is committed, it’s permanent. Even if the server loses power one second later, the data is safe.
Java Management: primarily handled by the database engine (e.g., PostgreSQL, MySQL) using transaction logs and journaling, but Java confirms it by waiting for a successful commit().

#JavaLearning #KnowledgeTransfer #InterviewPreparation
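Here is a database-free sketch of the atomicity idea, mirroring the commit/rollback pattern a JDBC version would use with con.setAutoCommit(false). The accounts, balances, and error message are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class AtomicTransferDemo {
    // In-memory stand-in for two account rows; a real version would use
    // JDBC with setAutoCommit(false), commit(), and rollback().
    static Map<String, Integer> balances = new HashMap<>();

    static void transfer(String from, String to, int amount) {
        // Snapshot = the state we can roll back to if anything fails.
        Map<String, Integer> snapshot = new HashMap<>(balances);
        try {
            balances.put(from, balances.get(from) - amount);
            if (balances.get(from) < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            balances.put(to, balances.get(to) + amount); // "commit": both writes survive
        } catch (RuntimeException e) {
            balances = snapshot; // rollback: all or nothing
            System.out.println("rolled back: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        balances.put("A", 100);
        balances.put("B", 0);
        transfer("A", "B", 150); // fails mid-way -> everything rolls back
        System.out.println(balances.get("A") + " / " + balances.get("B")); // 100 / 0
        transfer("A", "B", 40);  // succeeds -> both sides change together
        System.out.println(balances.get("A") + " / " + balances.get("B")); // 60 / 40
    }
}
```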
🚨 Debugging a tricky @Transactional issue in Spring Boot + Hibernate

Recently, I encountered a subtle but impactful issue while working on a backend workflow involving database updates — and it’s something many developers might run into without realizing it.

🔍 The Problem
We were intermittently hitting:
👉 StaleObjectStateException

At first glance, everything looked correct:
• Entities were being re-fetched from the database
• Transactions were properly defined
• No obvious concurrency issues

Yet, the error kept occurring.

🧠 Root Cause (What was actually happening)
The issue boiled down to how Hibernate manages its first-level cache (Persistence Context) and how @Transactional(propagation = REQUIRES_NEW) behaves.

Here’s a simplified flow:

1️⃣ Outer Transaction
• Fetches an entity
• Entity is now managed in the Hibernate session (version = 1)

2️⃣ Inner Transaction (REQUIRES_NEW)
• Updates the same entity
• Database version becomes 2
• Transaction commits successfully

3️⃣ Back to Outer Transaction
• Still holds old version (1) in memory
• Hibernate is still tracking this entity

4️⃣ A SELECT query is executed
• Hibernate triggers auto-flush
• Attempts to update the entity using version = 1
• Database has version = 2

💥 Result → StaleObjectStateException

⚠️ Why re-fetching didn’t help
Even after fetching the entity again:
• Hibernate returned the same cached instance
• Because it already existed in the persistence context
👉 So we were still working with stale data

🛠️ The Fix
We explicitly cleared the persistence context to stop Hibernate from tracking the stale entity:

entityManager.detach(entity);
or
entityManager.clear();

This ensured:
• The entity is no longer managed
• No dirty checking is triggered
• No unintended auto-flush occurs

💡 Key Learnings
✔️ Hibernate’s first-level cache can cause unexpected issues in multi-transaction flows
✔️ REQUIRES_NEW creates a separate transaction and persistence context
✔️ Outer transactions do NOT automatically refresh their state
✔️ Even a simple SELECT can trigger an auto-flush

🚫 Common Pitfalls
• Assuming a re-fetch always hits the database
• Ignoring persistence context behavior
• Mixing multiple transaction scopes with shared entities

✅ Best Practices
• Detach or clear entities when crossing transaction boundaries
• Prefer DTOs instead of passing entities between layers
• Use refresh() when you need the latest DB state
• Be mindful of auto-flush behavior

🎯 One-line takeaway
When using REQUIRES_NEW, always consider that your outer transaction may still hold stale data in its persistence context.

This experience reinforced an important lesson:
👉 Understanding how Hibernate manages state internally is crucial for building reliable systems.

Curious to know — have you faced similar issues in your projects?
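To make the version mismatch tangible without a Hibernate setup, here is a hypothetical plain-Java simulation of the optimistic-locking check behind StaleObjectStateException. The map, method names, and message are stand-ins, not Hibernate API; the real check is the versioned UPDATE statement described in the comment:

```java
import java.util.HashMap;
import java.util.Map;

public class OptimisticLockDemo {
    // The "database": entity id -> current version, mimicking a @Version column.
    static Map<Long, Integer> dbVersion = new HashMap<>();

    // Mimics Hibernate's versioned update:
    //   UPDATE t SET ..., version = version + 1 WHERE id = ? AND version = ?
    // If the row's version has moved on, zero rows match, and Hibernate
    // reports the failure as StaleObjectStateException.
    static void update(long id, int expectedVersion) {
        if (dbVersion.get(id) != expectedVersion) {
            throw new IllegalStateException(
                "StaleObjectStateException: row was updated by another transaction");
        }
        dbVersion.put(id, expectedVersion + 1);
    }

    public static void main(String[] args) {
        dbVersion.put(1L, 1);      // outer transaction loads the entity at version 1
        int outerCopy = 1;         // ...and keeps that version in its persistence context

        update(1L, 1);             // inner REQUIRES_NEW transaction commits -> version 2

        try {
            update(1L, outerCopy); // outer flush still sends version 1 -> stale
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```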
Wanted to share a few learnings from a recent backend upgrade we did. ⚙️

We moved one of our services to a newer Java, Spring, and Hibernate stack, and one thing became very clear: big upgrades usually don’t fail at the compiler boundary. They fail at the assumption boundary.

For us, the real work was not just the JDK upgrade. It was aligning everything around it:
- persistence mappings
- date/time behavior
- JSON serialization

One of the earliest signals came from date/time handling. Before moving to java.time, we saw that some flows still using legacy java.sql.* types were not behaving correctly after the upgrade. In multiple cases, timestamp values were ending up in the database as 00:00:00.000 instead of carrying the expected time component. That created major issues across dev environments because downstream flows were now reading and acting on incomplete or incorrect timestamps.

That was the point where it became clear: this was not just a framework upgrade problem. It was a type-system and consistency problem.

So we moved legacy JDBC-era date/time handling toward java.time, especially LocalDateTime. Java 21 does not suddenly reject java.sql.Date or java.sql.Timestamp, but modern frameworks definitely favor java.time. And with Hibernate 6.6.x, that path is much cleaner and more predictable.

Why that mattered: older java.sql.* types were still “working” in parts of the system, but they became a poor fit across the full stack, especially once we looked at:
- ORM mappings
- Redis serialization
- integration flows
- consistency of behavior across layers

The second issue was quieter, but just as important. 🧩

Once LocalDateTime entered one of our Redis-backed JSON flows, serialization started failing with:

InvalidDefinitionException: Java 8 date/time type `java.time.LocalDateTime` not supported by default

The root cause was a custom ObjectMapper.

The fix:

OBJECT_MAPPER.registerModule(new JavaTimeModule());
OBJECT_MAPPER.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

Why this mattered:
- JavaTimeModule teaches Jackson how to handle java.time types
- disabling WRITE_DATES_AS_TIMESTAMPS makes unannotated date/time fields serialize as readable strings by default instead of timestamp/array-style output

In short: the upgrade exposed hidden consistency gaps. The API layer may be fine, but caches, async flows, and internal serializers can still be operating under very different assumptions.

A few takeaways from this upgrade 👇
- JDK upgrades are often the easy part
- date/time migration deserves first-class attention
- Hibernate 6.6.x is far happier when your model uses java.time
- custom ObjectMappers are hidden compatibility surfaces
- consistency across DB, API, cache, and integrations matters more than “it compiles”

#Java21 #SpringBoot #Hibernate #BackendEngineering #SoftwareArchitecture #TechLearning
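A small runnable illustration of the date/time loss described above (the timestamp value is made up). Routing an instant through java.sql.Date silently drops the time-of-day component, which is the midnight-only symptom, while java.time keeps the full value explicit in the type:

```java
import java.sql.Date;
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class DateTimeLossDemo {
    public static void main(String[] args) {
        // A full timestamp, as the application intended to store it.
        Timestamp ts = Timestamp.valueOf("2024-05-01 14:30:00");
        System.out.println(ts);     // 2024-05-01 14:30:00.0

        // The same instant viewed through java.sql.Date (a DATE mapping):
        // its toString/normalization drops the time component entirely.
        Date asDate = new Date(ts.getTime());
        System.out.println(asDate); // 2024-05-01

        // java.time carries the time component in the type itself.
        LocalDateTime ldt = ts.toLocalDateTime();
        System.out.println(ldt);    // 2024-05-01T14:30
    }
}
```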
Spring Boot DAY 24 – @Entity & @Table: Mapping a Java Object to a Database Table 👇

In real-world applications, we don’t write SQL for every operation manually. Instead, we map Java objects to database tables using JPA (Java Persistence API). This is where @Entity and @Table come into play. 🚀

🔹 @Entity
👉 @Entity tells JPA: "This Java class represents a table in the database." Once a class is marked as @Entity, JPA (with providers like Hibernate) manages it.

Example:

@Entity
public class Employee {
    @Id
    private Long id;
    private String name;
    private double salary;
}

✔ Each object of Employee becomes one row in the database.
✔ Each field becomes a column.

🔹 @Table
👉 @Table is used to specify the exact table name in the database. By default, JPA uses the class name as the table name. But if your table name is different, you can customize it.

Example:

@Entity
@Table(name = "employee_details")
public class Employee { }

Now:
Java Class → Employee
Database Table → employee_details

🔥 How ORM Works
ORM (Object-Relational Mapping) connects the object world (Java classes) ↔ the database world (tables & rows).

Thanks to JPA + Hibernate:
✔ No need to write boilerplate SQL
✔ Clean and readable code
✔ Easy CRUD operations
✔ Database independence

💡 Why This Is Important
@Entity is the foundation of JPA. Without it, Spring Boot cannot map your class to the database. It is the first step toward building:
✔ Enterprise applications
✔ REST APIs
✔ Scalable backend systems

🧠 Quick Summary
🔹 @Entity → marks a class as a DB entity
🔹 @Table → defines the table name
🔹 Each object = one row
🔹 Each field = one column
🔹 Core concept of ORM

🔖 #SpringBoot #JPA #Hibernate #ORM #Java #BackendDevelopment
🚀 Day 3 of My Advanced Java Journey – Mastering CRUD Operations in JDBC

Today, I implemented one of the most important concepts in backend development — CRUD operations using JDBC.

🔹 What is CRUD?
CRUD stands for:
Create → insert data
Read → fetch data
Update → modify existing data
Delete → remove data

🔹 1. Create (INSERT)
Used to add records into the database.
✔️ Key concept: using PreparedStatement for inserting values safely.

String sql = "INSERT INTO employees(name, designation, salary) VALUES (?, ?, ?)";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setString(1, "Vamsi");
ps.setString(2, "Software Engineer");
ps.setDouble(3, 60000);
ps.executeUpdate();

🔹 2. Read (SELECT)
Used to retrieve and display data.
✔️ Key concept: using ResultSet to iterate through records.

Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM employees");
while (rs.next()) {
    int id = rs.getInt("id");
    String name = rs.getString("name");
    String designation = rs.getString("designation");
    double salary = rs.getDouble("salary");
}

🔹 3. Update (UPDATE)
Used to modify existing records.

String sql = "UPDATE employees SET salary = ? WHERE id = ?";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setDouble(1, 65000);
ps.setInt(2, 1);
ps.executeUpdate();

🔹 4. Delete (DELETE)
Used to remove records from the database.

String sql = "DELETE FROM employees WHERE id = ?";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setInt(1, 1);
ps.executeUpdate();

🔍 What I explored beyond the session
Why PreparedStatement is preferred over Statement (prevents SQL injection 🔐)
Difference between executeQuery() and executeUpdate()
Importance of handling exceptions (SQLException)
Closing resources (Connection, Statement, ResultSet) to avoid resource leaks

💡 CRUD operations form the core of any real-world application, from simple apps to enterprise systems.
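Since a MySQL instance isn't always at hand, here is a database-free stand-in that exercises the same CRUD contract against an in-memory map. Employee and EmployeeStore are invented names for this sketch, not part of the post's project:

```java
import java.util.HashMap;
import java.util.Map;

public class CrudContractDemo {
    record Employee(int id, String name, double salary) {}

    // In-memory stand-in for the employees table, so the four CRUD
    // operations can be exercised without a database connection.
    static class EmployeeStore {
        private final Map<Integer, Employee> rows = new HashMap<>();

        void create(Employee e) { rows.put(e.id(), e); }   // INSERT
        Employee read(int id)   { return rows.get(id); }   // SELECT
        void updateSalary(int id, double salary) {         // UPDATE
            Employee e = rows.get(id);
            if (e != null) rows.put(id, new Employee(id, e.name(), salary));
        }
        void delete(int id)     { rows.remove(id); }       // DELETE
    }

    public static void main(String[] args) {
        EmployeeStore store = new EmployeeStore();
        store.create(new Employee(1, "Vamsi", 60000));
        System.out.println(store.read(1).salary()); // 60000.0
        store.updateSalary(1, 65000);
        System.out.println(store.read(1).salary()); // 65000.0
        store.delete(1);
        System.out.println(store.read(1));          // null
    }
}
```

Swapping the map for real PreparedStatement calls (as in the post) changes the implementation, not the contract — which is exactly the point of the DAO pattern.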
🙌 Special thanks to the amazing trainers at TAP Academy: kshitij kenganavar Sharath R MD SADIQUE Bibek Singh Hemanth Reddy Vamsi yadav Harshit T Ravi Magadum Somanna M G Rohit Ravinder TAP Academy 📌 Learning in public. Building consistency every day. #Java #AdvancedJava #JDBC #BackendDevelopment #LearningInPublic #VamsiLearns
🚀 Day 4 of My Advanced Java Journey – PreparedStatement in JDBC

Today, I learned one of the most important concepts in JDBC — PreparedStatement, which makes database operations more secure and efficient.

🔹 What is PreparedStatement?
A PreparedStatement is used to execute SQL queries with dynamic values using placeholders (?). It helps in writing cleaner, reusable, and secure database code.

🔹 Steps to Use PreparedStatement

1️⃣ Load the driver — load the JDBC driver class.

2️⃣ Establish a connection — connect to the database using URL, username, and password.

3️⃣ Create the PreparedStatement — write the SQL query with placeholders (?):

String query = "INSERT INTO employee (id, name, desig, salary) VALUES (?, ?, ?, ?)";
PreparedStatement pstmt = con.prepareStatement(query);

4️⃣ Set parameter values — assign values using setter methods:

pstmt.setInt(1, id);
pstmt.setString(2, name);
pstmt.setString(3, desig);
pstmt.setInt(4, salary);

5️⃣ Execute the query:

int rows = pstmt.executeUpdate();

🔹 Batch Processing (Multiple Inserts)
Used to insert multiple records efficiently in one go.

String s;
do {
    pstmt.setInt(1, scan.nextInt());
    pstmt.setString(2, scan.next());
    pstmt.setString(3, scan.next());
    pstmt.setInt(4, scan.nextInt());
    pstmt.addBatch();
    System.out.println("Add more? (yes/no)");
    s = scan.next();
} while (s.equalsIgnoreCase("yes"));
int[] result = pstmt.executeBatch();

🔹 Important Methods
setInt(), setString(), setFloat() → set values
executeUpdate() → insert/update/delete
addBatch() → add a query to the batch
executeBatch() → execute all at once

🔍 What I explored beyond the session
PreparedStatement prevents SQL injection attacks 🔐
Precompiled queries improve performance
Difference between Statement and PreparedStatement
Importance of closing resources (Connection, PreparedStatement)
Using try-with-resources for better resource management

💡 PreparedStatement is a must-know concept for writing secure and optimized database applications in Java.
🙌 Special thanks to the amazing trainers at TAP Academy: kshitij kenganavar Sharath R MD SADIQUE Bibek Singh Vamsi yadav Hemanth Reddy Harshit T Ravi Magadum Somanna M G Rohit Ravinder TAP Academy 📌 Learning in public. Improving every single day. #Java #AdvancedJava #JDBC #PreparedStatement #BackendDevelopment #LearningInPublic #VamsiLearns
That jump from syntax to building something real is where things start to click. What stands out is that you picked up patterns like DAO and connection pooling early, which many developers only understand after hitting performance or maintenance issues. Adding BCrypt and basic validation next will bring it even closer to production-grade.