Thread pool types:

1. Fixed Thread Pool: has a fixed number of threads.
Method: Executors.newFixedThreadPool(int n). Internally uses a LinkedBlockingQueue.
Use case: steady load where you want to strictly limit resource usage.

2. Cached Thread Pool: creates new threads as needed but reuses existing threads when available; a thread idle for 60 seconds is terminated.
Method: Executors.newCachedThreadPool(). Internally uses a SynchronousQueue.
Use case: applications with many short-lived asynchronous tasks (push notifications, SMS alerts).

3. Scheduled Thread Pool: can schedule tasks to run after a given delay or to execute periodically.
Method: Executors.newScheduledThreadPool(int corePoolSize). Internally uses a DelayedWorkQueue.
Use case: background cleanup tasks, heartbeat signals, or polling.

4. Single Thread Executor: a single worker thread executes all tasks, guaranteeing that they run sequentially.
Method: Executors.newSingleThreadExecutor(). Internally uses a LinkedBlockingQueue.
Use case: tasks that must be processed one at a time in a specific order (e.g., event sequencing, ledger accounting).

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
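The four factory methods above, side by side, in a runnable sketch. The pool sizes and the sum-of-squares task are arbitrary choices for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;

public class PoolDemo {

    // Fixed pool: worker count is capped at 4; extra tasks wait in the
    // pool's internal LinkedBlockingQueue instead of spawning new threads.
    static int sumSquaresOnFixedPool(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int v = i;
                futures.add(pool.submit(() -> v * v)); // Callable<Integer>
            }
            int sum = 0;
            for (Future<Integer> f : futures) sum += f.get();
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sumSquaresOnFixedPool(10)); // 1*1 + ... + 10*10 = 385

        // The other three factories, for comparison:
        ExecutorService cached = Executors.newCachedThreadPool();             // grows on demand, 60s idle timeout
        ScheduledExecutorService sched = Executors.newScheduledThreadPool(2); // delayed / periodic tasks
        ExecutorService single = Executors.newSingleThreadExecutor();         // strict sequential FIFO
        cached.shutdown();
        sched.shutdown();
        single.shutdown();
    }
}
```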
Ajay Kumar’s Post
A recent issue reminded me that performance optimizations can sometimes become production problems.

We had an API that:
1️⃣ Fetches initial details
2️⃣ Extracts IDs from the response
3️⃣ Makes another database call to fetch larger secondary data

To speed up step 3, parallel processing was introduced using a fixed thread pool. Sounds reasonable — until load testing began.

Under heavy traffic, thread creation kept increasing across instances until limits were hit, leading to:
⚠️ "Can't create new native thread"

The interesting part? The optimization worked for individual requests. But at scale, the resource model didn't. A request with a small number of IDs didn't always need dedicated worker threads, yet threads were still being allocated repeatedly under concurrent load.

The fix was moving to a shared/reusable thread pool model with better resource control.

💡 My takeaway: Code that is fast in isolation may fail under concurrency. When designing for performance, it's important to ask:
- How does this behave at 1 request?
- How does this behave at 1000 requests?
- What resources grow with traffic?

Scalability is often less about speed, more about control.

#BackendEngineering #Java #PerformanceTesting #Scalability #Concurrency
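A minimal sketch of the "shared pool" fix described above. The class and method names, the pool sizing, and fetchDetail (standing in for the real secondary database call) are all hypothetical; the point is one bounded pool created once at startup instead of threads allocated per request:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SecondaryFetcher {

    // ONE bounded pool shared by ALL requests: thread count no longer
    // grows with traffic, so "can't create new native thread" goes away.
    private static final ExecutorService SHARED_POOL = new ThreadPoolExecutor(
            8, 8,                               // fixed worker count (illustrative)
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(1_000),   // bounded queue = backpressure, not OOM
            new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the caller thread

    // Hypothetical stand-in for the secondary database call.
    static String fetchDetail(String id) {
        return "detail-" + id;
    }

    // Fan the per-ID lookups out onto the shared pool and collect results in order.
    static List<String> fetchAll(List<String> ids) {
        List<Future<String>> futures = new ArrayList<>();
        for (String id : ids) {
            futures.add(SHARED_POOL.submit(() -> fetchDetail(id)));
        }
        List<String> out = new ArrayList<>();
        for (Future<String> f : futures) {
            try {
                out.add(f.get());
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        return out;
    }
}
```

The bounded queue plus CallerRunsPolicy is one way to make overload degrade gracefully instead of failing; the right rejection policy depends on the workload.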
Multi-threading can silently corrupt your data. 💀

The worst bugs don't crash your server. They just quietly bankrupt your logic while you sleep.

Imagine this: 100% uptime. Lightning-fast latency. Every monitor is green. Then the audit hits. 10,000 transactions were processed, but only 9,920 were recorded. Where did the other 80 go? They weren't "lost." They were murdered by a Race Condition.

In high-concurrency systems, like a massive data pipeline, your code starts lying to you. When two threads fight for the same piece of state without proper orchestration, "Lost Updates" happen. No stack trace. No error log. Just a silent, brutal drift in your data that no compiler will ever catch.

The amateur move? Panic and slap a global synchronized lock on the logic. The result: you just turned your 10-lane highway into a single-track dirt road. You "fixed" the bug by killing the performance. That isn't engineering, it's a surrender.

If you want to build for scale, you have to move past basic locking and master atomic contention. By leveraging the java.util.concurrent toolkit, you stop fighting threads and start orchestrating them:

- Atomic State: Swap standard Maps for ConcurrentHashMap. Use .merge(). It handles the "check-then-act" logic atomically. No manual locks. No performance death-spiral.
- Managed Execution: Stop spawning raw threads. Use an ExecutorService. Control your resources before they crash your JVM.

The result? A system that is both bulletproof and blazing fast. Zero data loss. 4x throughput improvement. And most importantly, data you can actually trust.

The Reality Check: A fast system that gives the wrong answer isn't a "performance win." It's a liability. If you aren't thinking about atomicity and thread contention, you aren't building a system; you're playing Russian Roulette with your data.

#Java #SoftwareEngineering #BackendDevelopment #SystemDesign #Concurrency #HighPerformance #CleanCode #Programming
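A minimal, runnable illustration of the ConcurrentHashMap.merge() point above: several threads hammer one counter and no increment is lost. The thread and iteration counts are arbitrary choices for the demo:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TxnCounter {

    // merge() performs the read-modify-write as ONE atomic step, so
    // concurrent increments cannot interleave into a lost update.
    static long countTransactions(int threads, int perThread) {
        Map<String, Long> totals = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    totals.merge("processed", 1L, Long::sum); // no explicit lock
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return totals.getOrDefault("processed", 0L);
    }
}
```

With a plain HashMap and unsynchronized put(get() + 1), the same workload would typically record fewer than threads x perThread transactions — exactly the silent drift the post describes.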
1. ThreadLocal: The "Private Locker" Strategy

When you use ThreadLocal, each thread gets its own independent copy of a variable. There is no shared data, so there is no contention.

The Use Case: Imagine handling multiple money transfer requests.
Request 1: Customer C101, Txn ID: TXN1001
Request 2: Customer C202, Txn ID: TXN2001

We use ThreadLocal to store the Transaction ID so that every log or service call within that thread knows which transaction it's working on, without passing it as a method parameter everywhere.

public class RequestContext {
    private static ThreadLocal<String> txnId = new ThreadLocal<>();

    public static void setTxnId(String id) { txnId.set(id); }
    public static String getTxnId() { return txnId.get(); }
    public static void clear() { txnId.remove(); } // Always clean up!
}

Why not synchronize here? Because Thread 1 doesn't care about Thread 2's ID. We need Isolation, not locking.

2. Synchronization: The "Gatekeeper" Strategy

We use synchronized when threads must access the exact same piece of data (like a bank balance). If two threads try to debit the same account at the exact same time, you'll end up with incorrect data without a lock.

public synchronized void debit(int amount) {
    if (balance >= amount) {
        balance -= amount;
    }
}

Why not use ThreadLocal here? If each thread had its own "copy" of the balance, the actual account would never be updated globally. We need Consistency, which requires a lock.

Key Takeaway:
Use ThreadLocal when you want to avoid synchronization overhead for data that is specific to a thread's execution context (e.g., User IDs, DB Connections, Transaction IDs).
Use synchronized when threads must modify the same shared resource and you need to ensure data integrity.

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
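The isolation claim is easy to verify: two threads set different transaction IDs through the same ThreadLocal and each reads back only its own. A small self-contained sketch; the customer and txn IDs mirror the example above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RequestContextDemo {

    private static final ThreadLocal<String> TXN_ID = new ThreadLocal<>();

    // Simulate two concurrent "requests", each tagging its own thread with its txn id.
    static Map<String, String> runTwoRequests() {
        Map<String, String> seen = new ConcurrentHashMap<>();
        Thread t1 = new Thread(() -> {
            TXN_ID.set("TXN1001");          // visible only inside this thread
            seen.put("C101", TXN_ID.get());
            TXN_ID.remove();                // always clean up (pools reuse threads)
        });
        Thread t2 = new Thread(() -> {
            TXN_ID.set("TXN2001");
            seen.put("C202", TXN_ID.get());
            TXN_ID.remove();
        });
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen;
    }
}
```

No locks anywhere, yet each "request" sees only its own ID — isolation rather than mutual exclusion.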
Topic: Thinking About Data Flow

Understanding data flow is key to building better systems. In many applications, issues arise not from the code itself, but from how data moves between components.

Questions to consider:
• Where does data originate?
• How is it processed?
• Where is it stored?
• How is it consumed?

Clear data flow helps with:
• Better system design
• Easier debugging
• Improved performance

Because when you understand the flow, you understand the system. And better understanding leads to better decisions.

How do you visualize or track data flow in your systems?

#SystemDesign #DataFlow #SoftwareEngineering #BackendDevelopment #Java
I spent the last week building a data science environment on my homelab from scratch. 150GB of social media post data, a full congressional record, and a PostgreSQL instance tuned to eat all of it. Here's how it went.

The dataset was 1.7 million tiny JSONL files. Python can parse fast, but opening 1.7M files means 1.7M syscalls — that's the actual bottleneck, not the data size. So I skipped Python entirely for the merge step. One bash pipeline:

find ... -name '*.jsonl' -print0 | xargs -0 cat | split -C 1G

cat and split are C programs doing buffered I/O. No interpreter overhead, no per-file open/close. That turned 1.7M files into ~150 clean 1GB chunks in about 3 hours.

From there, Python took over. Each chunk gets read with orjson and loaded into PostgreSQL via COPY over a Unix socket — no TCP overhead. The table uses declarative range partitioning by week, 208 partitions spanning 2023–2026, all managed through SQLAlchemy and Alembic migrations.

The ingestion pipeline uses a staging table pattern: COPY a batch into a temp table, then INSERT INTO ... SELECT ... ON CONFLICT DO NOTHING into the partitioned table. When a batch fails, it splits in half and retries recursively until it isolates the single bad row, which gets logged to a failed_ingestion table. No silent data loss, no full-batch failures.

All of this planning meant 250 million rows ingested in about 30 minutes.

The same database also holds a full congressional dataset: 164,753 bills with their full text, vote records, legislator profiles, and social media accounts. Proper relational models with foreign keys and cascading deletes, loaded from congress-tracker YAML/JSON sources.

One thing this led to was moving my PostgreSQL WAL to its own ZFS dataset on Optane drives — because I was seeing 1GB/s writes when I ingested the first time.
All open source: https://lnkd.in/eu2xm2Md

I needed all of this data because Matt, the data scientist I'm working with, is a wonderful crazy person who said we need more data.

#DataEngineering #PostgreSQL #ZFS #NixOS #Python #Infrastructure
🚀 Day 6 – HashMap vs ConcurrentHashMap (When Thread Safety Matters)

Today I explored the difference between HashMap and ConcurrentHashMap.

We often use HashMap like this:

Map<String, Integer> map = new HashMap<>();

👉 But here's the catch: HashMap is not thread-safe. In a multi-threaded environment:
- Multiple threads modifying it can lead to data inconsistency
- It can even cause infinite loops during resizing (rare, but critical)

So what's the alternative?

Map<String, Integer> map = new ConcurrentHashMap<>();

👉 ConcurrentHashMap is designed for safe concurrent access.

💡 Key differences I learned:
✔ HashMap
- No synchronization
- Faster in single-threaded scenarios
✔ ConcurrentHashMap
- Uses fine-grained (bucket-level) locking; older versions (pre-Java 8) used segment-level locking
- Allows multiple threads to read and write safely

⚠️ Insight: Instead of locking the whole map, it locks only a part of it → much better performance than a fully synchronized map.

💡 Real-world use: Whenever multiple threads access shared data (like caches or session data), ConcurrentHashMap is the safer choice.

#Java #BackendDevelopment #Concurrency #JavaInternals #LearningInPublic
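One common real-world shape of the caching use case mentioned above: computeIfAbsent gives you an atomic get-or-load, so an expensive loader runs at most once per key even under concurrent access. The class and key names here are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SessionCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final AtomicInteger loads = new AtomicInteger();

    // Atomic get-or-load: the mapping function runs at most once per key,
    // even if several threads request the same key simultaneously.
    String get(String userId) {
        return cache.computeIfAbsent(userId, id -> {
            loads.incrementAndGet();        // counts the expensive "DB" loads
            return "session-for-" + id;
        });
    }

    int loadCount() {
        return loads.get();
    }
}
```

With a plain HashMap, the same check-then-put sequence would need external synchronization to avoid two threads loading the same key or corrupting the map.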
Day 86 – Binary Tree Level Order Traversal

Worked on traversing a binary tree level by level using a queue-based approach (Breadth-First Search).

Key Learnings:
- Learned to use a queue (FIFO) for level-wise traversal of trees
- Understood how to process nodes layer by layer using the queue size
- Strengthened understanding of tree traversal patterns and data structure usage

#DSA #Java #BinaryTree #BFS #Queue #ProblemSolving #CodingPractice
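The "queue size per layer" trick described above, as a minimal sketch. The Node class is a bare-bones stand-in for whatever tree type the problem defines:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class LevelOrder {

    static class Node {
        int val;
        Node left, right;
        Node(int v) { val = v; }
    }

    // BFS: the queue size at the top of each loop is exactly the number of
    // nodes on the current level, so we drain that many before moving down.
    static List<List<Integer>> levelOrder(Node root) {
        List<List<Integer>> levels = new ArrayList<>();
        if (root == null) return levels;
        Queue<Node> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            int size = queue.size();           // nodes on this level only
            List<Integer> level = new ArrayList<>();
            for (int i = 0; i < size; i++) {
                Node n = queue.poll();
                level.add(n.val);
                if (n.left != null) queue.add(n.left);
                if (n.right != null) queue.add(n.right);
            }
            levels.add(level);
        }
        return levels;
    }
}
```

Both time and space are O(n): every node is enqueued and dequeued exactly once.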
🚀 08/04/26 — Stack Foundations: Balancing Parentheses with Precision

Today was a productive session where I successfully implemented the Valid Parentheses (LeetCode 20) challenge. This problem is a fundamental exercise in using the Stack data structure to manage nested relationships and maintain order-based logic.

🧱 The Valid Parentheses Logic

The goal is to determine if an input string containing (, ), {, }, [ and ] is valid, based on whether every open bracket is closed by the correct type and in the correct order.

The Stack Strategy:
- Pushing: As I iterate through the string, whenever I encounter an opening bracket—(, {, or [—I push it onto the stack.
- Popping and Matching: When I encounter a closing bracket, I first check if the stack is empty. If it is, the string is invalid. Otherwise, I pop the top element from the stack and compare it to the current closing bracket.
- Validation: If the brackets don't match (e.g., a ) following a {), the function immediately returns false.
- Final Check: After the loop finishes, the string is only valid if the stack is completely empty, ensuring all open brackets were properly closed.

Complexity Metrics:
- Time Complexity: O(n), where n is the length of the string, as we perform a single linear pass.
- Space Complexity: O(n) in the worst case, where the string contains only opening brackets, requiring them all to be stored in the stack.

📈 Consistency Report

Coming off my 50-day streak milestone, today's focus on stacks feels like a solid pivot from the sliding window and array patterns I've been mastering recently. The logic used here is remarkably similar to the structural checks I used in the "String Search" and "Mountain Array" problems earlier this month, where maintaining a specific state across iterations was key.

Huge thanks to Anuj Kumar (a.k.a. CTO Bhaiya on YouTube) for the continuous inspiration. Every new data structure I master adds another layer to my problem-solving toolkit!
My tested O(n) stack-based implementation is attached below! 📄👇 #DSA #Java #Stack #ValidParentheses #DataStructures #Complexity #Consistency #LearningInPublic #CTOBhaiya
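Since the attached image doesn't travel with the text, here is a minimal sketch of the exact strategy described above (push openers, pop-and-match on closers, empty stack at the end); not the author's attached code, just one straightforward rendering of it:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

public class ValidParentheses {

    // Each closer mapped to the opener it must match.
    private static final Map<Character, Character> PAIRS =
            Map.of(')', '(', ']', '[', '}', '{');

    static boolean isValid(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                          // opener: push and continue
            } else if (PAIRS.containsKey(c)) {
                // Closer: stack must be non-empty and its top must be the partner.
                if (stack.isEmpty() || stack.pop() != PAIRS.get(c).charValue()) {
                    return false;
                }
            }
        }
        return stack.isEmpty();                         // every opener was closed
    }
}
```

A single linear pass, O(n) time and O(n) worst-case stack space, matching the complexity analysis above.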
🚀 Day 16 of #128DaysOfCode

Solved a classic stack-based problem today!

🔍 Problem: Validate whether a string containing brackets () {} [] is properly balanced.

💡 Approach: Used a Stack (LIFO) to track opening brackets and match them with corresponding closing brackets.
- Push opening brackets
- On a closing bracket → check the top of the stack
- If mismatch or the stack is empty → invalid
- At the end, the stack should be empty

This problem highlights how stacks are perfect for handling nested structures and order-based validation 🧠

Key Learnings:
✔ Strengthened understanding of the Stack data structure
✔ Learned how to handle edge cases like mismatched and unordered brackets
✔ Improved problem-solving approach for string-based questions

⏱ Complexity:
Time → O(n)
Space → O(n)

Consistency is the key 🔥 On to Day 17 💪

#DSA #Java #LeetCode #Stack #ProblemSolving #CodingJourney #PlacementsPreparation
🚀 New Video: Why @Transactional is Important in Spring Boot

What happens if:
✔ Employee is saved
❌ IdCard fails
👉 You get inconsistent data.

This is where @Transactional saves you.

💡 Simple idea: Either everything succeeds… or nothing is saved.

In this video, I show:
✔ The real problem (partial data save)
✔ How rollback works
✔ Why transactions are critical in real systems

🎥 Watch here: https://lnkd.in/dN3Duxnj

#SpringBoot #JPA #Java #BackendDevelopment #Hibernate
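For readers who want the shape of the fix before watching: a sketch of what this scenario typically looks like in a Spring Boot service. The service, repository, and entity names here are hypothetical illustrations, not taken from the video:

```java
@Service
public class EmployeeOnboardingService {

    private final EmployeeRepository employeeRepo;
    private final IdCardRepository idCardRepo;

    public EmployeeOnboardingService(EmployeeRepository employeeRepo,
                                     IdCardRepository idCardRepo) {
        this.employeeRepo = employeeRepo;
        this.idCardRepo = idCardRepo;
    }

    // Both saves run inside ONE transaction. If saving the IdCard throws,
    // Spring rolls the whole transaction back, so the Employee row is not
    // committed either -- no half-saved state.
    @Transactional
    public void onboard(Employee employee) {
        employeeRepo.save(employee);
        idCardRepo.save(new IdCard(employee)); // a failure here undoes the save above
    }
}
```

One detail worth knowing: by default Spring rolls back only on unchecked exceptions (RuntimeException/Error); to roll back on a checked exception you need @Transactional(rollbackFor = ...).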