This week I stopped just solving problems and started actually understanding my tools. The thing nobody tells you early on: you can know the logic perfectly and still write terrible code, because you're reinventing what already exists. That was me. So this week was all about the STL, the C++ Standard Template Library.

What is the STL and why does it matter? It's a collection of ready-made data structures and algorithms built into C++. Instead of manually building a hashmap or a dynamic array from scratch, you use what's already optimized and battle-tested. map, unordered_map, vector, stack, queue, set: these aren't just containers. Knowing which one to use, and when, is the difference between a clean O(n) solution and a messy O(n²) one. In a real interview you don't have time to build from scratch. You need to know your tools.

What I actually worked on this week:
→ map vs unordered_map: ordered traversal vs O(1) average lookup tradeoffs
→ adjacency lists using map<int, vector<int>>
→ the prefix sum pattern
→ combining a hashmap with modular arithmetic

Problems solved:
→ LRU Cache (medium): finally understood how to combine a hashmap with a doubly linked list
→ Sum of Distances (medium)
→ Make Sum Divisible by P (medium)
→ Minimum Operations to Make Array Sum Divisible by K

Stuck on: LFU Cache. LRU felt hard until it clicked; LFU is a whole different beast. Still working on it.

The honest part: "Make Sum Divisible by P" took me 2.5+ hours. I got TLE, then WA, fixed both, and finally understood why the solution works. Slow? Yes. But I didn't copy a solution; I earned it.

My LeetCode if you want to see the journey: https://lnkd.in/ghKx4CgM

Now a genuine question for the experienced folks here: when you were learning DSA, how did you balance depth vs speed? Spending 2-3 hours on one problem to fully understand it, or timeboxing it and moving on? Would love brutally honest takes. Drop them in the comments 👇

#LeetCode #DSA #CPP #STL #LearningInPublic #BackendDevelopment #SoftwareEngineering #100DaysOfCode
Mastering STL in C++ for Efficient Coding
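The post mentions combining a hashmap with modular arithmetic for "Make Sum Divisible by P" but includes no code, so here is a minimal C++ sketch of that pattern (LeetCode 1590; the function name `minSubarray` is my own choice): track prefix sums mod p in an `unordered_map` and look for the earlier prefix that makes the removed subarray's sum congruent to the total's remainder.

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_map>
#include <vector>

// Sketch of the "prefix sum + hashmap + modular arithmetic" pattern:
// remove the shortest subarray so the remaining sum is divisible by p.
int minSubarray(const std::vector<int>& nums, int p) {
    long long target = 0;
    for (int x : nums) target = (target + x) % p;   // total sum mod p
    if (target == 0) return 0;                      // already divisible

    std::unordered_map<long long, int> lastIndex;   // remainder -> last index seen
    lastIndex[0] = -1;                              // the empty prefix
    long long cur = 0;
    int n = (int)nums.size(), best = n;
    for (int i = 0; i < n; ++i) {
        cur = (cur + nums[i]) % p;
        // We need an earlier prefix with remainder (cur - target) mod p, so the
        // subarray between the two prefixes has sum congruent to target mod p.
        long long need = (cur - target + p) % p;
        auto it = lastIndex.find(need);
        if (it != lastIndex.end()) best = std::min(best, i - it->second);
        lastIndex[cur] = i;
    }
    return best < n ? best : -1;                    // removing everything is not allowed
}
```

For example, `minSubarray({3, 1, 4, 2}, 6)` returns 1: removing the single element 4 leaves a sum of 6.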
🚀 Day 15 of My DSA Journey | Top K Frequent Elements 🔥

Today I solved one of the most important and frequently asked problems on LeetCode: Top K Frequent Elements.

💡 Problem Understanding
Given an integer array, return the k most frequent elements. Sounds simple, but the challenge is doing it efficiently ⚡

🧠 Brute Force Approach
Count each element's frequency with nested loops
Sort elements by frequency
Pick the top k
⛔ Time Complexity: O(n²) for the nested counting alone (inefficient for large inputs)

⚡ Optimized Approach (HashMap + Sorting)
Here's what I did:
✅ Step 1: Use a HashMap to store the frequency of each element
✅ Step 2: Convert the map into an array of (frequency, element) pairs
✅ Step 3: Sort the array in descending order of frequency
✅ Step 4: Pick the first k elements

📌 Example Walkthrough
Input: nums = [1,1,1,2,2,3], k = 2
Frequency Map: 1 → 3 times, 2 → 2 times, 3 → 1 time
Sorted: (3,1), (2,2), (1,3)
Output: [1,2]

⏱ Complexity Analysis
Time: O(n log n) (due to sorting)
Space: O(n)

🔥 Key Learning
A HashMap is super powerful for frequency problems
Converting data into a sortable structure simplifies the logic
Always ask: can I reduce nested loops?

🙏 Thanks to my mentor and a consistency mindset — improving every single day 💪 Day by day, problem by problem: becoming better than yesterday 🚀

#Day15 #DSAJourney #LeetCode #Java #Coding #ProblemSolving #HashMap #Sorting #TopKFrequent #Consistency #LearningInPublic
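The four steps above can be sketched in a few lines; this is a C++ rendering of the same HashMap + sorting approach (the post is Java-flavored, so treat this as an illustrative translation, not the author's code):

```cpp
#include <algorithm>
#include <unordered_map>
#include <utility>
#include <vector>

// HashMap + sorting approach to Top K Frequent Elements (LeetCode 347).
std::vector<int> topKFrequent(const std::vector<int>& nums, int k) {
    std::unordered_map<int, int> freq;              // Step 1: element -> count
    for (int x : nums) ++freq[x];

    std::vector<std::pair<int, int>> pairs;         // Step 2: (frequency, element)
    for (const auto& [elem, cnt] : freq) pairs.push_back({cnt, elem});

    // Step 3: sort by frequency, descending.
    std::sort(pairs.begin(), pairs.end(),
              [](const auto& a, const auto& b) { return a.first > b.first; });

    std::vector<int> out;                           // Step 4: take the first k
    for (int i = 0; i < k && i < (int)pairs.size(); ++i)
        out.push_back(pairs[i].second);
    return out;
}
```

On the post's example, `topKFrequent({1, 1, 1, 2, 2, 3}, 2)` yields [1, 2].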
I built an MCP server that roasts your pull requests

You know that PR you shipped on Friday at 5pm with the description "misc fixes"? Yeah, this tool has opinions about that.

pr-roast-mcp is an MCP server that reads any GitHub PR (the diff, the stats, the description or lack thereof) and delivers a brutally honest code review, with a severity rating from 🔥 to 🔥🔥🔥🔥🔥.

▎ "Your tests are thorough. Like, suspiciously thorough. 156 lines for a POST endpoint?
▎ You're basically writing a dissertation on HTTP status codes."
▎
▎ "849 lines added, 7 removed. That's a 121:1 ratio. For a 'bonus feature,' this
▎ sprawls."

It's always technically accurate, though. Every roast points at real issues: naming, complexity, missing edge cases, over-engineering. It just delivers the feedback the way your most senior engineer would after their third coffee.

It always ends with one genuine compliment. Mine was about rounding edge cases in bonus calculations. Small wins.

Two tools, ~150 lines of Python:
- roast_pr: point it at any PR number or URL
- roast_my_prs: lists your PRs so you can pick a victim

It uses the gh CLI to fetch the diff and Claude Haiku for the roast. Setup is one line. We've been using it in our team Slack before merges. Morale has either improved or collapsed, depending on who you ask.

Code: https://lnkd.in/gHcZFTqB

#buildInPublic #AI #claude #haiku #MCP #Python #DevTools #CodeReview #OpenSource
🚀 rst-queue v0.1.6: Scaling Terabytes with Megabytes

In a world of bloated data systems, we often find ourselves throwing more hardware at software problems. But what if our tools were engineered to be small, grounded, and incredibly powerful?

Introducing rst-queue v0.1.6, a high-performance async queue system built for the modern developer who values efficiency above all else. Inspired by the psychology of the leafcutter ant, this project is the first major release from the Datarn initiative.

Why rst-queue?
Most Python-based queues are limited by the Global Interpreter Lock (GIL) and high memory overhead. rst-queue is different. By using Rust and the Crossbeam framework, we've built a system that:
⚡ Bypasses the GIL: achieve true parallelism with native Rust worker pools
🐜 Microscopic footprint: 30-50x less memory usage than traditional message brokers
🛡️ Dual modes: choose between AsyncQueue (in-memory, 1M+ items/sec) or the new AsyncPersistenceQueue (durable storage with the Sled KV store)

Grounded in the Kernel
The secret to our speed is "Simple OS Layering." We've designed rst-queue to sit as close to the OS kernel as possible, utilizing direct system calls and memory-mapped I/O. This isn't just a library; it's a high-velocity data crossing (Taran) for your most critical applications.

Get Started in Seconds
We believe in zero-setup excellence. You can add high-performance queuing to your Python project with a single command:

pip install rst-queue==0.1.6

Join the Datarn Movement
At Datarn, we are building a suite of "small but mighty" tools for data-intensive domains like B2B e-commerce and real-time analytics. rst-queue is just the beginning.

Explore the project on PyPI: https://lnkd.in/d54yqdea
Contribute on GitHub: https://lnkd.in/d_x3E-zj

#Python #RustLang #DataEngineering #OpenSource #Efficiency #Datarn #PerformanceOptimization #SoftwareArchitecture
LeetCode Daily | Day 78 🔥
LeetCode POTD – 3488. Closest Equal Element Queries (Medium) ✨

📌 Problem Insight
Given a circular array:
✔ For each query index, find the nearest index with the same value
✔ Distance is circular
✔ Return the minimum distance
✔ If no other same value exists → return -1

🔍 Initial Thinking – Brute Force ⚙️
💡 Idea:
✔ For each query, scan the entire array
✔ Check all indices with the same value
⚠️ Problem:
✔ O(n) per query → too slow
✔ Total becomes O(n²) in the worst case

💡 Key Observation 🔥
✔ Same values repeat → group their indices
✔ The nearest answer lies among adjacent indices in that group
✔ Circular distance: min(|i - j|, n - |i - j|)

🚀 Optimized Approach
✔ Store the indices for each value (hash map)
✔ For each query:
→ Use binary search to find its position in the group
→ Check the nearest left & right indices
✔ Handle circular wrap using modulo

🔧 Core Idea
✔ Reduce the search space using grouping
✔ Use binary search for nearest neighbors
✔ Apply the circular distance formula

⏱ Complexity
✔ Time: O(n + q log n)
✔ Space: O(n)

🧠 Key Learning
✔ Nearest-element problems → check neighbors, not all candidates
✔ Preprocessing (grouping) can drastically optimize queries
✔ Circular arrays often need wrap-around handling

🚀 Takeaway
A great mix of hashing + binary search + circular logic: a classic interview pattern for reducing brute force into efficient queries ⚡

#LeetCode #DSA #Algorithms #CPlusPlus #ProblemSolving #CodingJourney
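The optimized approach above can be sketched in C++ like this (the function name `solveQueries` and the exact structure are my own; the post itself doesn't include code): group indices per value, then binary-search each query's position in its group and compare the circular distances to both neighbors.

```cpp
#include <algorithm>
#include <cstdlib>
#include <unordered_map>
#include <vector>

// Group indices by value, then answer each query by checking the two
// adjacent indices in its group under the circular distance.
std::vector<int> solveQueries(const std::vector<int>& nums,
                              const std::vector<int>& queries) {
    int n = (int)nums.size();
    std::unordered_map<int, std::vector<int>> pos;  // value -> sorted indices
    for (int i = 0; i < n; ++i) pos[nums[i]].push_back(i);

    auto circ = [n](int i, int j) {                 // min(|i-j|, n-|i-j|)
        int d = std::abs(i - j);
        return std::min(d, n - d);
    };

    std::vector<int> ans;
    for (int q : queries) {
        const auto& idx = pos[nums[q]];
        int m = (int)idx.size();
        if (m == 1) { ans.push_back(-1); continue; } // value occurs only once
        // Binary search for q's position inside its (sorted) group.
        int p = (int)(std::lower_bound(idx.begin(), idx.end(), q) - idx.begin());
        // Neighbors wrap around the group because the array is circular.
        int left  = idx[(p - 1 + m) % m];
        int right = idx[(p + 1) % m];
        ans.push_back(std::min(circ(q, left), circ(q, right)));
    }
    return ans;
}
```

For example, with nums = [1,3,1,4,1,3,2] and queries = [0,3,5], the answers are [2,-1,3].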
Day 21 of 100 Completed

Today shifted focus toward core data structures while continuing revision: building stronger fundamentals in linked lists.
• #206 - Reverse Linked List - solved
• Studied linked list & doubly linked list basic operations
• Continued revision of previous topics

🔎 Focus Areas
• Pointer manipulation and traversal logic
• Understanding the structure of singly vs doubly linked lists
• Strengthening fundamentals through revision

💡 Key Takeaways (DSA)
📌 #206 Reverse Linked List
This problem is all about pointer control:
• keep track of previous, current, next
• reverse links step by step without losing references
• clean logic matters more than complexity here

📌 Linked List & Doubly Linked List Basics
• Singly LL → one-directional traversal
• Doubly LL → an extra back pointer for flexibility
• Operations like insertion, deletion, and traversal depend heavily on pointer accuracy
Key insight: linked lists are simple in theory, but easy to mess up if pointer handling isn't precise.

🚀 Revision
Continued revising earlier topics to strengthen retention.
• Concepts feel more stable with repetition
• Better clarity in choosing approaches
• Still improving speed and confidence

⚡ Honest Reflection
This was a foundational day. Not flashy, but important. Pointer-based problems require precision, and I'm still building that muscle. Mistakes are happening, which means there's room to improve. Revision + fundamentals together is a good move right now. Consistency is intact. The base is getting stronger.

Patterns recognized: Linked List | Doubly Linked List | Pointer Manipulation | Reversal | Traversal | Fundamentals Reinforcement

#100DaysOfCode #DSA #Python #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
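The prev/current/next pointer control the post describes is short enough to sketch in full; here it is in C++ for illustration (the post is Python-tagged, but the pointer logic is identical):

```cpp
#include <cstddef>

// Minimal singly linked list node for illustration.
struct ListNode {
    int val;
    ListNode* next;
    ListNode(int v) : val(v), next(nullptr) {}
};

// LeetCode 206: walk the list once, tracking prev / curr / next
// so no reference is ever lost while links are reversed in place.
ListNode* reverseList(ListNode* head) {
    ListNode* prev = nullptr;
    ListNode* curr = head;
    while (curr != nullptr) {
        ListNode* next = curr->next;  // save the rest of the list first
        curr->next = prev;            // reverse one link
        prev = curr;                  // advance both pointers
        curr = next;
    }
    return prev;                      // prev is the new head
}
```

Each iteration reverses exactly one link, so the whole list is reversed in O(n) time and O(1) space.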
Shallow Copy vs Deep Copy: The 2 AM Bug Trap 🛑

Most developers think they understand copying objects, until their original data mysteriously changes. That's not a bug; that's memory behavior biting you.

→ Shallow Copy
Creates a new container, but nested objects are still shared (by reference).
👉 Change nested data → both copies change.
Best for: flat, simple data.

→ Deep Copy
Creates a completely independent clone; everything is copied recursively.
👉 Change anything → the original stays untouched.
Best for: complex, nested structures.

💡 Rule of Thumb
Shallow → when you only need a surface-level copy
Deep → when you need true isolation

⚠️ The real trap: most bugs aren't syntax errors. They come from not understanding how data behaves in memory.

If you've ever spent hours debugging only to realize it was a shallow copy issue, welcome to the club 😄

#Python #Python3 #Programming #SoftwareEngineering #CleanCode #Debugging #TechTips #PythonDeveloper #BackendDevelopment
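The post is about Python, but the same trap is easy to demonstrate in C++, where an object holding its nested data through a pointer gets a shallow copy by default (the copy duplicates the pointer, not what it points to), while an object holding the data by value gets a true deep copy. The two structs below are illustrative, not from the original post:

```cpp
#include <memory>
#include <vector>

// Shallow: nested data held through a shared pointer; the
// compiler-generated copy shares the pointed-to vector.
struct Shallow {
    std::shared_ptr<std::vector<int>> data;
};

// Deep: nested data held by value; the compiler-generated
// copy clones the whole vector.
struct Deep {
    std::vector<int> data;
};
```

Mutating the copy's nested data through `Shallow` changes the original too; through `Deep`, the original stays untouched.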
🚀 Day 344 of solving 365 medium questions on LeetCode! 🔥

Today's challenge: "89. Gray Code"

✅ Problem: You are given an integer n. Your goal is to generate an n-bit Gray code sequence: an array of 2^n integers where every adjacent pair of numbers (including the first and last) differs by exactly one bit in its binary representation.

✅ Approach (Bit Manipulation / The Formula)
You could solve this using backtracking or mirroring, but there is a mathematical cheat code that solves it instantly!
1. Find the size: for an n-bit sequence there are exactly 2^n numbers. I used a bitwise left shift (1 << n) to calculate this size instantly.
2. The magic formula: the i-th number in a standard Gray code sequence is always i ^ (i >> 1). Shift the bits right by one and XOR against the original number.
3. List comprehension: I packed this entire logic into a single Python list comprehension that loops from 0 up to the calculated size, applying the formula to every index i.

✅ Key Insight
Bitwise operations are essentially black magic when you know the right formulas. Recognizing that Gray code has a direct integer-to-sequence mapping completely eliminates the need for messy recursive state-tracking. What looks like a complex combinatorial sequence problem is actually a one-line math trick!

✅ Complexity
Time: O(2^n), since we must generate exactly 2^n elements.
Space: O(1) auxiliary, excluding the output array.

🔍 Python solution attached!
🔥 Flexing my coding skills until recruiters notice!

#LeetCode365 #BitManipulation #Math #Python #ProblemSolving #DSA #Coding #SoftwareEngineering
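The formula above is language-agnostic; here is a C++ sketch of the same approach (the post's own solution is a Python one-liner, so this is a translation for illustration):

```cpp
#include <vector>

// LeetCode 89: the i-th Gray code is i ^ (i >> 1),
// and there are exactly 1 << n codes for n bits.
std::vector<int> grayCode(int n) {
    int size = 1 << n;                 // 2^n numbers in the sequence
    std::vector<int> out;
    out.reserve(size);
    for (int i = 0; i < size; ++i)
        out.push_back(i ^ (i >> 1));   // the magic formula
    return out;
}
```

For n = 2 this produces [0, 1, 3, 2]; each adjacent pair (including the wrap-around) differs in exactly one bit.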
Solved LeetCode 110 – Balanced Binary Tree 🌳

Most people start with a brute-force approach (recomputing heights again and again), which leads to O(n²) in worst cases. Instead, I focused on an optimized bottom-up DFS (post-order traversal).

💡 Key idea:
- Each node returns its height if balanced
- Returns -1 as a sentinel if unbalanced
- Propagates early to avoid unnecessary computation

class Solution {
    public boolean isBalanced(TreeNode root) {
        return checkHeight(root) != -1;
    }

    int checkHeight(TreeNode node) {
        if (node == null) return 0;
        int left = checkHeight(node.left);
        int right = checkHeight(node.right);
        if (left == -1 || right == -1) return -1;
        if (Math.abs(left - right) > 1) return -1;
        return Math.max(left, right) + 1;
    }
}

🚀 Complexity:
Time: O(n)
Space: O(h) (recursion stack)

📌 What I learned:
- Combining multiple computations (height + balance) into a single traversal
- Using sentinel values to simplify recursion
- Thinking in bottom-up patterns for tree problems

This pattern is widely useful in tree-based problems and even shows up in backend systems where hierarchical data needs validation.

#Java #DataStructures #Algorithms #LeetCode #CodingInterview #BackendDevelopment
🚀 LeetCode 207. Course Schedule | Solved | Medium | Graph | Cycle Detection (DFS)
🔗 Solution Link: https://lnkd.in/gNerrUfM

At first, this doesn't look like a graph problem. But once you model prerequisites as edges, it becomes a directed graph: b → a (to take a, you must complete b).

💡 Core Idea
The question reduces to: can we complete all courses? Equivalently: does the graph contain a cycle? If there's a cycle, it is impossible to finish all courses.

🧠 Approach (DFS + State Tracking)
Initially, I tried applying undirected-graph cycle logic, but that doesn't work here. In directed graphs, we need an extra state:
0 → not visited
1 → visited
2 → in recursion stack (instack)

While doing DFS:
- If we visit a node already in instack, we found a cycle
- If visited but not in the stack → safe
- After exploring, remove it from the stack (backtrack)

This "instack" idea is the key difference from undirected graphs.

📈 Complexity
Time: O(V + E)
Space: O(V)

A classic problem that teaches the subtle difference between undirected and directed cycle detection.

#LeetCode #Graph #DFS #CycleDetection #TopologicalSort #DSA #ProblemSolving #CodingJourney
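The three-state DFS described above can be sketched like this in C++ (a minimal sketch of the technique, not the author's linked solution):

```cpp
#include <vector>

// State 0 = not visited, 1 = fully processed, 2 = in the recursion stack.
// Visiting a state-2 node during DFS means we found a back edge: a cycle.
bool hasCycle(int node, const std::vector<std::vector<int>>& adj,
              std::vector<int>& state) {
    state[node] = 2;                       // entering the recursion stack
    for (int next : adj[node]) {
        if (state[next] == 2) return true; // back edge -> cycle
        if (state[next] == 0 && hasCycle(next, adj, state)) return true;
    }
    state[node] = 1;                       // backtrack: done with this node
    return false;
}

// LeetCode 207: all courses can be finished iff the prerequisite
// graph contains no directed cycle.
bool canFinish(int numCourses,
               const std::vector<std::vector<int>>& prerequisites) {
    std::vector<std::vector<int>> adj(numCourses);
    for (const auto& p : prerequisites)
        adj[p[1]].push_back(p[0]);         // edge b -> a: complete b before a
    std::vector<int> state(numCourses, 0);
    for (int i = 0; i < numCourses; ++i)
        if (state[i] == 0 && hasCycle(i, adj, state)) return false;
    return true;
}
```

The outer loop over all nodes handles disconnected components, and each node and edge is visited once, giving the O(V + E) bound from the post.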
LeetCode Daily | Day 81 🔥
LeetCode POTD – 2452. Words Within Two Edits of Dictionary (Medium) ✨

📌 Problem Insight
Given two arrays of same-length words, queries and dictionary:
✔ You can change characters (edits) in a query
✔ A query matches if ≤ 2 edits are needed
✔ Return all such queries

🔍 Initial Thinking – Brute Force ⚙️
💡 Idea:
✔ Compare every query with every dictionary word
✔ Count character differences
⚠️ Concern:
✔ Seems heavy → O(Q × D × n)
✔ But the constraints are small → acceptable

💡 Key Observation 🔥
✔ This is just Hamming distance ≤ 2
✔ Early stopping helps → stop once diff > 2
✔ No need for complex data structures

🚀 Optimized Approach
✔ For each query:
→ Compare with dictionary words
→ Count mismatches
→ If mismatches ≤ 2 → valid

🔧 Core Idea
✔ Character-by-character comparison
✔ Break early if differences exceed 2
✔ Add the query once a match is found

⏱ Complexity
✔ Time: O(Q × D × n)
✔ Space: O(1)

🧠 Key Learning
✔ Not every problem needs optimization tricks
✔ Constraints guide the approach
✔ Early breaking can significantly reduce runtime

🚀 Takeaway
A clean implementation problem where recognizing Hamming distance makes everything straightforward ⚡

#LeetCode #DSA #Algorithms #CPlusPlus #ProblemSolving #CodingJourney