🚀 DSA Challenge – Day 94
Problem: Design LRU (Least Recently Used) Cache ⚡📦

This was one of those classic data structure design problems that truly tests your understanding of hash maps and linked lists working together seamlessly for O(1) performance!

🧠 Problem Summary:
Implement a data structure that behaves like an LRU Cache, supporting:
✅ get(key) → Retrieve value in O(1).
✅ put(key, value) → Insert/update value in O(1).
✅ Automatically evict the least recently used key when capacity is exceeded.

⚙️ My Approach:
To achieve O(1) operations, I combined two powerful structures:
1️⃣ HashMap (keyMap) → For constant-time key lookups.
2️⃣ Doubly Linked List → To maintain the order of usage (most recent at the front).

🔹 When a key is accessed or updated: move it to the front (most recent).
🔹 When inserting a new key: if the cache is full, remove the least recently used node (from the tail), then insert the new key-value pair at the front.

The linked list allows O(1) addition/removal, and the hashmap ensures O(1) lookup and update.

📈 Complexity:
Time: O(1) → For both get and put.
Space: O(capacity) → For the hashmap and list nodes.

✨ Key Takeaway:
This problem elegantly demonstrates how data structures complement each other — the linked list maintains order, and the hashmap ensures constant-time access. A perfect synergy of logic and structure! ⚙️💡

🔖 #DSA #100DaysOfCode #LeetCode #ProblemSolving #LRUCache #DataStructures #HashMap #LinkedList #Python #Algorithms #SystemDesign #CodingChallenge #EfficientCode #InterviewPrep #TechCommunity #CodeEveryday #LearningByBuilding
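As a sketch of the design above: Python's `collections.OrderedDict` already bundles a hash map with a doubly linked list, so the same O(1) behavior can be expressed compactly. This is an illustration, not the post's original code; the class and method names, and returning -1 on a miss, follow the usual LeetCode convention.

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache sketch: OrderedDict provides the hash map +
    doubly linked list combination described in the post."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)          # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)      # updating also refreshes recency
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
```

In an interview you may be asked to build the doubly linked list by hand, but the eviction and recency logic is the same either way.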
"Designing LRU Cache with HashMap and LinkedList"
More Relevant Posts
🌳 Day 9 of 30 – LeetCode Challenge: Balanced Binary Tree ✅⚖️

Today’s challenge was about determining whether a Binary Tree is height-balanced — a key concept in tree data structures and AVL trees. A binary tree is considered balanced if the height difference between the left and right subtrees of every node is no more than 1.

🧩 Problem Summary
Given the root of a binary tree, return True if it’s height-balanced, else return False.
Example 1: Input: [3,9,20,null,null,15,7] → Output: True
Example 2: Input: [1,2,2,3,3,null,null,4,4] → Output: False
Example 3: Input: [] → Output: True

💡 Approach – DFS + Height Check
We use recursion to check the height of each subtree. At each node:
✅ If the left & right subtrees are balanced and
✅ the height difference is ≤ 1 → the node is balanced.
We return -1 as a signal whenever an imbalance is found, to stop further unnecessary checks.

⚙️ Complexity
Time: O(n) — visiting each node once
Space: O(h) — recursion stack (h = tree height)

🏆 Result & Learnings
✅ Efficient check using post-order traversal
🧠 Learned how to compute height + balance in one recursion
⚡ Avoided repeated height calculations → better performance

✨ Where Is This Used?
🌿 Self-balancing trees (AVL, Red-Black Trees)
⚙️ Efficient searching & insertion in ordered data
📊 Real-time systems where balanced height ensures fast access

#Day9 #LeetCode #BinaryTree #DSA #Python #CodingChallenge #30DaysOfCode #ProblemSolving #WomenWhoCode #MTech #Trees
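The recursion described above (post-order height computation with -1 as the imbalance signal) can be sketched roughly like this; `TreeNode` and `is_balanced` are illustrative names, not necessarily the original post's code:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_balanced(root) -> bool:
    # Post-order DFS: return the subtree height, or -1 the moment
    # any subtree is found to be imbalanced (short-circuits extra work).
    def height(node):
        if not node:
            return 0
        lh = height(node.left)
        if lh == -1:
            return -1
        rh = height(node.right)
        if rh == -1:
            return -1
        if abs(lh - rh) > 1:
            return -1                # imbalance signal propagates up
        return 1 + max(lh, rh)

    return height(root) != -1
```

Computing height and checking balance in the same recursion is what keeps the whole check O(n) instead of O(n log n).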
🚀 DSA Challenge – Day 93
Problem: Find Median from Data Stream 📊⚙️

This was an exciting deep dive into data structures and real-time computation — maintaining the median efficiently while continuously adding numbers from a stream.

🧠 Problem Summary:
Design a class MedianFinder that can:
✅ Add numbers dynamically from a data stream.
✅ Return the median at any point in time.
If the total number of elements is even → median = mean of the two middle values. If odd → median = middle value.

⚙️ My Approach:
1️⃣ Use two heaps to maintain balance: a max heap (maxHeap) for the smaller half of the numbers, and a min heap (minHeap) for the larger half.
2️⃣ Whenever a new number arrives: push it into the maxHeap (with values negated to simulate max-heap behavior), then balance both heaps so that their size difference is never more than 1.
3️⃣ Ensure heap order: the maximum of maxHeap ≤ the minimum of minHeap.
4️⃣ The median is: the top of the larger heap (if the count is odd), or the average of both tops (if even).

📈 Complexity:
Time: O(log n) → For insertion and heap balancing.
Space: O(n) → To store all elements in the heaps.

✨ Key Takeaway:
This problem highlights how heaps can turn complex real-time median calculations into a smooth, efficient process — a great example of data structure synergy in action. ⚡

🔖 #DSA #100DaysOfCode #LeetCode #ProblemSolving #Heaps #PriorityQueue #DataStructures #Algorithms #Median #Python #CodingChallenge #InterviewPrep #EfficientCode #DynamicProgramming #TechCommunity #LearningByBuilding #CodeEveryday
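A minimal sketch of the two-heap approach above, using `heapq` (which only provides min-heaps, hence the negation trick for the max heap). The snake_case method names are mine; LeetCode's interface uses addNum/findMedian:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.small = []   # max heap via negated values: the smaller half
        self.large = []   # min heap: the larger half

    def add_num(self, num: int) -> None:
        heapq.heappush(self.small, -num)
        # Restore heap order: max(small) must be <= min(large).
        heapq.heappush(self.large, -heapq.heappop(self.small))
        # Rebalance so the size difference never exceeds 1.
        if len(self.large) > len(self.small):
            heapq.heappush(self.small, -heapq.heappop(self.large))

    def find_median(self) -> float:
        if len(self.small) > len(self.large):
            return -self.small[0]                       # odd count
        return (-self.small[0] + self.large[0]) / 2     # even count
```

Both `add_num` operations are O(log n) heap pushes/pops; `find_median` is O(1) since it only peeks at the heap tops.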
Data Structure and Algorithm: Array 👩🏾💻

I’ve been using arrays for a while, but now I’m actually starting to understand how they work in memory and how their time complexity really makes sense.

An array isn’t just a bunch of items stored randomly. It’s a contiguous block of memory where all the elements sit side by side. Because of that, the computer knows exactly where each element is stored, which is why accessing elements is so fast. For example, if you want the 5th element, the computer doesn’t need to go through everything one by one. It just calculates the exact position from the memory address. That’s why accessing an element is O(1), which means constant time. But inserting or deleting something in between is slower, O(n), because the other elements may need to shift.

There are mainly two types of arrays:
1. One-dimensional array
2. Multi-dimensional array

A one-dimensional array is like a straight line of elements. Think of it as a simple list like [10, 20, 30, 40]. Each element has an index (0, 1, 2, 3), which makes accessing any element easy and fast. A multi-dimensional array, on the other hand, has more than one level, like a table (2D) or a cube (3D). A two-dimensional array feels like rows and columns in a spreadsheet. A three-dimensional array is like stacking multiple tables on top of each other, imagine a cube of data.

One thing that really stood out to me is that arrays are static in size, which means once you create them, you can’t easily change their size. This is also why Python lists are more flexible: they’re built on top of arrays but can grow or shrink dynamically.

Understanding how time and space complexity works made me realize how powerful arrays actually are:
Accessing an element → O(1)
Searching → O(n)
Insertion or Deletion → O(n)
Traversing all elements → O(n)

I attached an image of examples of the different types of array below. That's all for now, bye ☺️❤️

#TechJourney #PythonLearning #TechCommunity #Array #DataStructure #DSA #Python #Programming #Algorithm
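The complexities above can be seen directly with Python's stdlib `array.array`, which (unlike a regular list of objects) stores its elements in one contiguous block of fixed-size C values, matching the memory model described:

```python
from array import array

# One contiguous block of C ints, as in the description above.
a = array('i', [10, 20, 30, 40])

# O(1) access: the element's address is computed as
# base_address + index * itemsize, so no traversal is needed.
fourth = a[3]

# O(n) insertion in the middle: every later element must shift right.
a.insert(1, 15)

# O(n) search: without extra structure, the array is scanned linearly.
position = a.index(30)
```

After the insert, `a` holds [10, 15, 20, 30, 40], so the linear search for 30 now lands at index 3.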
Accessing high-quality data just got easier! KDnuggets' latest article explores how data professionals can now leverage a new Python API client to seamlessly interact with Data Commons—a comprehensive knowledge graph that aggregates open data from reliable sources. This streamlined access empowers analysts and data scientists to fetch, explore, and analyze rich datasets without the typical integration headaches. A must-read for anyone looking to enhance their data workflows with structured, ready-to-use information! Check out the full article here: https://lnkd.in/dDcFpD4i #DataScience #MachineLearning #Analytics #DataVisualization
Just came across Vortex - a new columnar format that’s taking some interesting shots at improving on Parquet. The main difference? It keeps data compressed while you query it, rather than decompressing first. Also supports cascade compression (multiple encoding layers) and has zero-copy interop with Arrow. Still early, but could be interesting for anyone dealing with large analytical workloads. Built in Rust with Python bindings. Worth a look: https://docs.vortex.dev #DataEngineering #OpenSource
Most people save pandas DataFrames as .csv or .xlsx. But here’s the truth — those formats are dumb storage. They don’t remember types, indexes, or even NaNs properly.

That’s where Pickle (.pkl) comes in. Pickle isn’t just a file — it’s a snapshot of your Python object. When you load it back, you get the exact same DataFrame, including:
• Column dtypes
• Index
• Categoricals
• Complex objects

Now, wrap that .pkl in gzip compression and you get .pkl.gz:

df.to_pickle("data.pkl.gz", compression="gzip")

✅ Exact data recovery (no conversions)
✅ 70–90% smaller than .pkl
✅ 10–15x faster load than .csv
✅ Perfect for caching models, merged data, or preprocessed outputs

In short — Pickle saves structure. Gzip saves space. Together, .pkl.gz saves your time.
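Under the hood, `to_pickle` with gzip compression composes two stdlib modules, `pickle` and `gzip`. A minimal sketch of that same snapshot idea with a plain Python object (no pandas required, and an in-memory buffer standing in for the .pkl.gz file):

```python
import gzip
import io
import math
import pickle

# An object with structure that CSV would mangle: a tuple, a list, a NaN.
record = {"ids": [1, 2, 3], "label": ("a", "b"), "missing": float("nan")}

# Serialize through gzip: pickle preserves structure, gzip shrinks it.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    pickle.dump(record, f)

# Load it back: the exact same object graph is reconstructed.
buf.seek(0)
with gzip.GzipFile(fileobj=buf, mode="rb") as f:
    restored = pickle.load(f)
```

The tuple stays a tuple and the NaN stays a NaN, which is exactly the "snapshot" property the post describes. (One caveat worth knowing: pickle files should only be loaded from sources you trust, since unpickling can execute arbitrary code.)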
🚀 DSA Progress – Day 104
✅ Problem #49: Group Anagrams
🧠 Difficulty: Medium | Topics: Hash Table, String, Sorting, Frequency Count

🔍 Approach: Implemented a hashing + frequency count method to group all words that are anagrams of each other.
Step 1 (Character Frequency): For every string in the input list, create a fixed-size array of 26 integers representing the frequency of each lowercase letter (a–z).
Step 2 (Canonical Representation): Convert this frequency array into a canonical key shared by all words with the same letter composition. For example, "eat", "tea", and "ate" all map to the same signature (their sorted form is "aet").
Step 3 (Hash Map Grouping): Use a dictionary (defaultdict(list)) where the key is the canonical signature and the value is the list of words that match it.
Step 4 (Result Construction): Collect all the dictionary values and return them as the grouped anagrams.

🕒 Time Complexity: O(n × k), where n = number of words and k = average length of each word (to count characters).
💾 Space Complexity: O(n × k), for storing all words and their frequency signatures.

📁 File: https://lnkd.in/g2MkYXJg
📚 Repo: https://lnkd.in/g8Cn-EwH

💡 Learned: This problem reinforced the power of hashing and canonical representation for grouping similar items. Instead of sorting each string (which costs O(k log k)), using a frequency count made the grouping faster and cleaner. It deepened my understanding of how to map variable-length strings into fixed-size signatures — a common trick in string hashing problems.

✅ Day 104 complete — grouped scattered words into perfect anagram families using hash magic! 🔤✨📚

#LeetCode #DSA #Python #HashMap #StringManipulation #Anagrams #FrequencyCount #100DaysOfCode #DailyCoding #InterviewPrep #GitHubJourney
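A sketch of the four steps above. Here a tuple of the 26 counts serves as the canonical key (tuples are hashable, so they can be dict keys directly); this plays the same role as the canonical string in the write-up, and `group_anagrams` is an illustrative name:

```python
from collections import defaultdict

def group_anagrams(words):
    groups = defaultdict(list)
    for w in words:
        # Step 1: fixed-size frequency signature over a-z.
        freq = [0] * 26
        for ch in w:
            freq[ord(ch) - ord('a')] += 1
        # Steps 2-3: the hashable signature keys the hash map.
        groups[tuple(freq)].append(w)
    # Step 4: each value is one family of anagrams.
    return list(groups.values())
```

Counting characters is O(k) per word versus O(k log k) for sorting each word, which is exactly the saving the post points out.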
Learning Data Structure And Algorithm: Tuple 👩🏾💻

Tuples are very similar to lists, but the big difference is that tuples are immutable. Once you create them, you can’t change, add, or remove any element, unlike lists. This makes them faster and more memory-efficient, especially when you want to store data that shouldn’t change.

Tuples are also hashable (as long as the elements inside them are), which simply means Python can give them a stable identity. This allows them to be used as keys in dictionaries or stored in sets. Lists can’t do that because they can be modified.

Time and space complexity of Tuples:
Time Complexity: Accessing elements → O(1); Searching → O(n)
Space Complexity: O(n), depending on the size of data stored

I added more information about tuples in the image/code below. That’s all for now, bye ☺️❤️

#Day6 #58DaysOfTech #PythonLearning #DSA #Tuple #TechJourney #LearnTogether #TechCommunity
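The two properties above (immutability and hashability) are easy to see in a few lines; a small illustrative sketch:

```python
point = (3, 5)

# Immutable: item assignment on a tuple raises TypeError.
try:
    point[0] = 10
    mutated = True
except TypeError:
    mutated = False

# Hashable: tuples work as dict keys and set members (lists would
# raise TypeError here, because they can be modified after insertion).
grid = {(0, 0): "start", (3, 5): "goal"}
visited = {(0, 0), (3, 5)}
found = grid[point]
```

This is why tuples show up naturally as coordinate keys, composite dictionary keys, and fixed records.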
💡 LeetCode Learning Log — Valid Palindrome II
🔗 Problem: https://lnkd.in/gzNVf7yx

Today’s challenge was simple to read, tricky to solve —
👉 “Can a string become a palindrome if you remove at most one character?”

1️⃣ Brute Force Approach
• Tried removing each character one by one and checked if the remaining string is a palindrome.
• Two nested loops — one for each deletion, one for the palindrome check.
• Time Complexity: O(n²) | Space: O(1)
• Works, but inefficient for long strings.

2️⃣ Optimal Approach — Two Pointer Technique
• Used two pointers from both ends to compare characters.
• On a mismatch, tried skipping either the left or the right character — and checked if the rest still forms a palindrome.
• Time Complexity: O(n) | Space: O(1)
• Elegant and efficient — minimal checks, no extra space.

✅ Result: All test cases passed successfully.
⏱ Runtime: 75 ms
💾 Memory: 12.39 MB

🔑 Key Learnings:
• The two-pointer approach isn’t just for arrays — it’s powerful for string problems too.
• Even in data engineering, this logic helps in text validation, data cleaning, and error-tolerant comparisons.
• Data engineers can use similar logic while validating data pipelines or cleansing text-based data efficiently.

#LeetCode #DSA #ProblemSolving #Python #TwoPointers #CodingJourney #DataEngineer #100DaysOfCode #Learning
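The optimal two-pointer approach described above can be sketched like this (`valid_palindrome` is an illustrative name; the key move is branching only once, at the first mismatch):

```python
def valid_palindrome(s: str) -> bool:
    def is_pal(i: int, j: int) -> bool:
        # Plain two-pointer palindrome check on s[i..j].
        while i < j:
            if s[i] != s[j]:
                return False
            i, j = i + 1, j - 1
        return True

    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            # First mismatch: we get one deletion, so try skipping
            # either the left or the right character.
            return is_pal(i + 1, j) or is_pal(i, j - 1)
        i, j = i + 1, j - 1
    return True
```

Since the single mismatch is handled with at most two more linear scans, the whole check stays O(n) time and O(1) extra space.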