🚀 Day 20/100: Data Types Deep Dive – Precision, Size & Memory 📊🧠

Today’s learning focused on the science behind data storage in Java. Writing efficient code is not just about logic; it’s about choosing the right data type to optimize memory usage and performance. Here’s a structured breakdown of what I explored:

🏗️ 1. Primitive Data Types – The Core Building Blocks
These are predefined types that store actual values directly in memory.

🔢 Numeric (whole numbers):
byte → 1 byte | Range: -128 to 127
short → 2 bytes | Range: -32,768 to 32,767
int → 4 bytes | Standard integer type
long → 8 bytes | Used for large values (L suffix)

🔢 Numeric (floating-point):
float → 4 bytes | Requires the f suffix
double → 8 bytes | Default for decimal values

🔤 Non-numeric:
char → 2 bytes | Stores a single UTF-16 code unit (one character for most text)
boolean → JVM-dependent size | Represents true or false

🏗️ 2. Non-Primitive Data Types – Reference Types
These types store references (memory addresses) rather than the actual values:
String → Sequence of characters
Array → Collection of elements of the same type
Class & Interface → Blueprints for objects

💡 Unlike primitives, their default value is null. The objects themselves live in Heap memory, while the reference variables (for locals) are stored on the Stack.

🧠 Key Insight:
Primitives → store the value itself (on the Stack when declared as local variables)
Non-Primitives → store references to objects in the Heap

⚙️ Why This Matters: choosing the correct data type improves:
✔️ Memory efficiency
✔️ Application performance
✔️ Code reliability at scale

📈 Today reinforced that strong fundamentals in data types are essential for writing optimized, production-ready Java applications.

#Day20 #100DaysOfCode #Java #Programming #MemoryManagement #DataTypes #SoftwareEngineering #CodingJourney #JavaDeveloper #10000Coders
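A minimal, runnable sketch of the types above (the literal values and the class name are just illustrative):

public class DataTypeDemo {
    public static void main(String[] args) {
        byte    b  = 127;              // 1 byte:  -128 to 127
        short   s  = 32_767;           // 2 bytes: -32,768 to 32,767
        int     i  = 2_147_483_647;    // 4 bytes: standard integer type
        long    l  = 9_000_000_000L;   // 8 bytes: note the L suffix
        float   f  = 3.14f;            // 4 bytes: note the f suffix
        double  d  = 3.14159;          // 8 bytes: default for decimals
        char    c  = 'A';              // 2 bytes: one UTF-16 code unit
        boolean ok = true;             // size is JVM-dependent

        String name   = "Java";        // non-primitive: the variable holds a reference
        int[]  scores = new int[3];    // the array object itself lives on the heap

        System.out.println(b + ", " + s + ", " + i + ", " + l);
        System.out.println(f + ", " + d + ", " + c + ", " + ok);
        System.out.println(name + " has " + scores.length + " scores");
    }
}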
Java Data Types: Primitive & Non-Primitive Explained
More Relevant Posts
Designed and implemented a modular ETL pipeline in Python to extract data from a REST API, transform and normalize JSON structures, and load processed data into PostgreSQL using SQLAlchemy. Focused on clean separation of pipeline stages and scalable architecture. Tech: Python, Pandas, SQLAlchemy, PostgreSQL. Link => https://lnkd.in/dWNjvx9n
🚀 DSL vs @Query in Spring Data JPA

While working with Spring Data JPA, I learned that by default it provides methods that work with the primary key, like findById(). But what if we want to fetch data using other fields like name, age, etc.? 🤔

We have two approaches 👇

🔹 1. Domain-Specific Language (DSL)
List<User> findByName(String name);
✔️ Method naming convention
✔️ Query is automatically generated
✔️ Easy to write and read
✔️ Best for simple queries

🔹 2. @Query Annotation
@Query("SELECT u FROM User u WHERE u.name = :name")
List<User> getUserByName(String name);
✔️ Query is written manually (JPQL/SQL)
✔️ More flexibility
✔️ Best for complex queries (joins, multiple conditions)

💡 Key Difference:
DSL → simple & automatic
@Query → flexible & customizable

🎯 Conclusion: Use DSL for quick and simple queries, and switch to @Query when you need more control.

#Java #SpringBoot #SpringDataJPA #BackendDevelopment #Coding #Developers #Learning
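For context, here is a sketch of how both approaches could sit together in one repository interface. The User entity with its name field is assumed, and on Spring versions not compiled with -parameters the named parameter needs an explicit @Param:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface UserRepository extends JpaRepository<User, Long> {

    // 1. Derived query (DSL): Spring generates the query from the method name
    List<User> findByName(String name);

    // 2. @Query: JPQL written by hand; @Param binds the :name parameter
    @Query("SELECT u FROM User u WHERE u.name = :name")
    List<User> getUserByName(@Param("name") String name);
}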
Most people don’t struggle with SQL… 🧠
They struggle with how they think about it.

I used to write SQL like this: SELECT → JOIN → WHERE → Done.

Then I learned to ask one simple question:
“Can SQL solve this before I write more code?”

Everything changed. 🚀
JOINs replaced manual loops
GROUP BY replaced tedious manual calculations
Window Functions replaced complex application logic

SQL stopped being just a query language… it became a superpower. ⚡

Whether you are navigating DQL (Querying), DML (Manipulation), or DDL (Definition), this chart (attached) is the roadmap to thinking like a Senior Developer.

#SQL #DataEngineering #SoftwareDevelopment #Database #DataScience #Coding #TechTrends2026 #Programming #BigData #Analytics #BackendDevelopment #Python #WebDev #EngineeringMindset #TechCareer
𝐃𝐚𝐲 𝟓𝟖 – 𝐃𝐒𝐀 𝐉𝐨𝐮𝐫𝐧𝐞𝐲 | 𝐀𝐫𝐫𝐚𝐲𝐬 🚀

Today’s problem focused on finding two numbers in a sorted array that add up to a target.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐨𝐥𝐯𝐞𝐝
• Two Sum II – Input Array Is Sorted

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡
• Used two pointers: one at the beginning (left), one at the end (right)
• Calculated the sum of both elements

Logic:
• If sum == target → return indices
• If sum < target → move left pointer forward
• If sum > target → move right pointer backward
This works because the array is already sorted.

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬
• Sorting enables two-pointer optimization
• Two pointers reduce time complexity from O(n²) to O(n)
• Direction of movement depends on comparison with target
• Index-based problems often become simpler with sorted data

𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲
• Time: O(n)
• Space: O(1)

𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲
When data is sorted, two pointers can turn a complex problem into a simple one.

58 days consistent 🚀 On to Day 59.

#DSA #Arrays #TwoPointers #LeetCode #Java #ProblemSolving #DailyCoding #LearningInPublic #SoftwareDeveloper
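A minimal Java sketch of the two-pointer approach described above (class and method names are illustrative; indices are returned 1-based, following the LeetCode problem’s convention):

public class TwoSumSorted {
    public static int[] twoSum(int[] numbers, int target) {
        int left = 0, right = numbers.length - 1;
        while (left < right) {
            int sum = numbers[left] + numbers[right];
            if (sum == target) {
                return new int[]{left + 1, right + 1}; // found the pair
            } else if (sum < target) {
                left++;   // need a larger sum: move the left pointer forward
            } else {
                right--;  // need a smaller sum: move the right pointer backward
            }
        }
        return new int[]{-1, -1}; // no pair adds up to the target
    }

    public static void main(String[] args) {
        int[] result = twoSum(new int[]{2, 7, 11, 15}, 9);
        System.out.println(result[0] + ", " + result[1]); // 1, 2
    }
}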
Hello world! I’ve built a small repository that can serve as a simple example of how to create an ETL pipeline from scratch using Python: Movie Pipeline 🎬 It ingests, cleans, and combines movie data from multiple providers into a unified and queryable dataset. The project follows a clear ETL approach: - Dedicated extractors and transformers per provider. - Data normalization for consistent joins. - Proper handling of nulls and duplicates. - A scalable design to easily add new data sources. I also included ideas for handling historical data using an SCD2 approach, which is useful for tracking how metrics evolve over time. It’s a simple but practical example that could be helpful if you’re starting with data pipelines or want a lightweight reference. Happy to hear any feedback! https://lnkd.in/enTaN-jc
💡 𝐉𝐚𝐯𝐚 𝐒𝐭𝐫𝐞𝐚𝐦𝐬: 𝐦𝐨𝐫𝐞 𝐭𝐡𝐚𝐧 𝐣𝐮𝐬𝐭 𝐟𝐚𝐧𝐜𝐲 𝐥𝐨𝐨𝐩𝐬

If you're still using for loops everywhere, you're probably leaving readability (and sometimes performance) on the table. Java Streams bring a declarative approach to data processing: you describe what you want, not how to iterate.

🔹 𝐇𝐨𝐰 𝐢𝐭 𝐰𝐨𝐫𝐤𝐬
Streams process data in a pipeline:
Source → Collection, array, etc.
Intermediate ops → map, filter, sorted
Terminal ops → collect, forEach, reduce

🔹 𝐄𝐱𝐚𝐦𝐩𝐥𝐞
List<String> names = List.of("Ana", "Bruno", "Carlos", "Amanda");
List<String> result = names.stream()
    .filter(name -> name.startsWith("A"))
    .map(String::toUpperCase)
    .sorted()
    .toList();

🔹 𝐊𝐞𝐲 𝐦𝐞𝐭𝐡𝐨𝐝𝐬
filter() → select data
map() → transform data
flatMap() → flatten nested structures
reduce() → aggregate values
collect() → build results

🔹 𝐖𝐡𝐲 𝐢𝐭 𝐦𝐚𝐭𝐭𝐞𝐫𝐬
✔ Cleaner and more expressive code
✔ Easy parallelization with .parallelStream()
✔ Encourages immutability and functional style

⚠️ 𝐁𝐮𝐭 𝐛𝐞𝐰𝐚𝐫𝐞: Streams are powerful, not always faster. Overusing them in hot paths can hurt performance.

👉 𝐑𝐮𝐥𝐞 𝐨𝐟 𝐭𝐡𝐮𝐦𝐛: 𝐔𝐬𝐞 𝐒𝐭𝐫𝐞𝐚𝐦𝐬 𝐟𝐨𝐫 𝐜𝐥𝐚𝐫𝐢𝐭𝐲 𝐟𝐢𝐫𝐬𝐭, 𝐨𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧 𝐥𝐚𝐭𝐞𝐫.

#Java #SoftwareEngineering #CleanCode #TechTips #Backend
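Using the same sample list, a small sketch of what the reduce() and collect() methods from the list above can look like in practice:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamAggregationDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Ana", "Bruno", "Carlos", "Amanda");

        // reduce(): fold the stream into a single value (total characters across all names)
        int totalLength = names.stream()
                .map(String::length)
                .reduce(0, Integer::sum);

        // collect(): build a richer result (names grouped by their first letter)
        Map<Character, List<String>> byInitial = names.stream()
                .collect(Collectors.groupingBy(n -> n.charAt(0)));

        System.out.println(totalLength); // 20
        System.out.println(byInitial);   // e.g. {A=[Ana, Amanda], B=[Bruno], C=[Carlos]}
    }
}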
A Spark Config That Took Me Days to Find

True story. I had a Spark job that worked perfectly in testing. Deployed it to production. The numbers were just... wrong. No crash. No error message. Just wrong output.

I spent days checking the logic. Rewriting queries. Adding logging everywhere. Nothing. Turns out the problem had nothing to do with my code.

Spark has a feature called "whole-stage code generation." Behind the scenes, it compiles your SQL into Java code to make it run faster. Cool, right? Except Java has a size limit on how big a single piece of compiled code can be. When your data has a lot of columns (we're talking 20-30+), the generated code gets too big. Spark doesn't tell you. It just quietly produces garbage.

The fix? One line of config:
spark.sql.codegen.wholeStage = false

That's it. It tells Spark: "don't try to be clever with code generation, just process it the normal way." Slightly slower. But correct every time.

I share this because I know someone out there is debugging the same thing right now, staring at perfectly fine-looking code, wondering why the numbers don't add up. Check your Spark codegen. It might save you a weekend.

#ApacheSpark #DataEngineering #Debugging #TechTips #PySpark #RealTalk
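The post is about PySpark, but the same toggle is available from any Spark session API; here is a sketch using Spark's Java API (the app name and local master are illustrative, the config key is the one quoted above):

import org.apache.spark.sql.SparkSession;

public class CodegenToggle {
    public static void main(String[] args) {
        // Disable whole-stage codegen when the session is built...
        SparkSession spark = SparkSession.builder()
                .appName("codegen-toggle-demo")
                .master("local[*]")
                .config("spark.sql.codegen.wholeStage", "false")
                .getOrCreate();

        // ...or flip it at runtime on an existing session
        spark.conf().set("spark.sql.codegen.wholeStage", "false");

        spark.stop();
    }
}

The same key can also be passed on the command line via spark-submit --conf spark.sql.codegen.wholeStage=false.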
🚀 Day 59: Diving into Arrays – The Foundation of Data Structures 📊

Today, I shifted my focus from OOP design back to the core of data handling in Java: Arrays. Understanding how to store and manage collections of data efficiently is where the real logic begins!

1. What is an Array?
An array is a fixed-size, contiguous block of memory that stores multiple elements of the same data type. It’s the simplest way to group related data (like a list of 100 integers) under a single variable name.

2. Ways to Declare an Array 📝
I learned that Java gives us flexibility in how we set them up:
▫️ Declaration & instantiation: int[] numbers = new int[5]; (creating an empty "shelf" with 5 slots)
▫️ Inline initialization: int[] numbers = {10, 20, 30, 40}; (creating the shelf and filling it at the same time)

3. Accessing & Assigning Values 🔑
The index rule: Java arrays are zero-indexed, meaning the first element is at index 0.
▫️ Assigning: use the index to target a specific slot: numbers[0] = 99;
▫️ Accessing: retrieve the data just as easily: System.out.println(numbers[0]);

💡 My Key Takeaway: The biggest "catch" with arrays is that they are fixed in size. Once you define an array of 5, you can’t suddenly make it 6. This makes them incredibly fast for memory access but requires careful planning during the design phase!

Question for the Developers: We all start with Arrays, but at what point in your projects do you usually decide to switch to an ArrayList? Is it always about the dynamic size, or are there other factors? 👇

#Java #DataStructures #Arrays #100DaysOfCode #BackendDevelopment #CodingFundamentals #CleanCode #LearningInPublic #JavaDeveloper 10000 Coders Meghana M
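A small runnable sketch of the declaration, assignment, and access patterns above (the values are just examples):

public class ArrayBasics {
    public static void main(String[] args) {
        // Declaration & instantiation: an empty "shelf" with 5 slots (each defaults to 0)
        int[] numbers = new int[5];

        // Inline initialization: create the shelf and fill it at the same time
        int[] primes = {2, 3, 5, 7};

        // Assigning via the zero-based index
        numbers[0] = 99;

        // Accessing
        System.out.println(numbers[0]);    // 99
        System.out.println(primes[3]);     // 7
        System.out.println(primes.length); // 4 (fixed: an array can never grow)

        // numbers[5] = 1; // would throw ArrayIndexOutOfBoundsException
    }
}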
Stop the Race: Solving Data Inconsistency in Concurrent Systems

Building a "working" application is easy. Building a reliable one is hard.

I recently spent time diving into the world of Concurrency and Data Integrity using Python and SQL. One of the most common (and dangerous) bugs in software is the "Race Condition": two processes try to update the same data at the same time, leading to "lost updates" and corrupted balances.

I simulated a high-traffic banking system to see how data inconsistency happens and, more importantly, how to stop it.

The Solution: A Two-Pronged Defense
1. Application-level locking: Using Python’s threading.Lock to create "Mutual Exclusion" (Mutex). This ensures that only one thread can run the critical "Read-Modify-Write" logic at a time.
2. Database-level integrity (ACID): Moving the logic into a relational database (PostgreSQL/SQLite) to leverage Atomicity and Isolation. By using BEGIN, SELECT ... FOR UPDATE, and COMMIT statements, the database acts as the ultimate gatekeeper of data truth.

Key Takeaways:
- Transactions are non-negotiable: if it’s not Atomic (all-or-nothing), it’s not safe.
- The "with" statement is a lifesaver: using context managers in Python ensures locks are released even if the code crashes, preventing deadlocks.
- Scalability matters: while local locks work for one server, ACID-compliant databases are essential for distributed systems.

Check out the snippet of my GitHub Codespaces setup below! https://lnkd.in/eguenR7g

#Python #SoftwareEngineering #SQL #Database #Coding #DataIntegrity #BackendDevelopment #GitHub
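The project above uses Python’s threading.Lock; as a rough Java analogue of the same mutual-exclusion idea (the Account class, amounts, and thread counts are illustrative, not taken from the original project), a per-account lock guards the read-modify-write:

import java.util.concurrent.locks.ReentrantLock;

class Account {
    private final ReentrantLock lock = new ReentrantLock();
    private long balance;

    Account(long initialBalance) { this.balance = initialBalance; }

    void deposit(long amount) {
        lock.lock();                    // only one thread may enter the critical section
        try {
            long current = balance;     // read
            balance = current + amount; // modify + write, safe while the lock is held
        } finally {
            lock.unlock();              // always released, even if the body throws
        }
    }

    long balance() {
        lock.lock();
        try { return balance; } finally { lock.unlock(); }
    }
}

public class RaceConditionDemo {
    public static void main(String[] args) throws InterruptedException {
        Account account = new Account(0);
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) account.deposit(1); };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println(account.balance()); // 20000 every run; without the lock, often less
    }
}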
🧩 Building Strong Foundations: Python for Data Validation

Before jumping into ETL testing, mastering the basics is critical. Here’s what the first phase of the journey looks like 👇

🔹 Start with core Python concepts:
- Data types, lists, dictionaries
- Loops and conditional logic
- Functions for reusable validation rules

🔹 Move into data handling:
- Reading CSV/JSON files
- Using Pandas for data manipulation
- Handling missing values & duplicates

💡 Detect duplicate records
data = [1, 2, 2, 3]
print(len(data) != len(set(data)))

💡 Basic data validation rule
def validate_null(val):
    return val is None

These simple checks are the building blocks of real-world data quality frameworks.

🎯 The goal here is not just coding…
…it’s thinking like a data tester. What can go wrong with data? How do I catch it early?

Next step → Applying these skills to ETL validation scenarios.

Follow Khushboo Gupta for more.

#PythonForData #DataValidation #Pandas #DataAnalytics #ETL #DataEngineering #TechSkills #Upskilling #LearningJourney