Day 21 of 100 Completed

Today shifted focus toward core data structures while continuing revision - building stronger fundamentals in linked lists.

• #206 - Reverse Linked List - solved
• Studied Linked List & Doubly Linked List basic operations
• Continued revision of previous topics

🔎 Focus Areas
• Pointer manipulation and traversal logic
• Understanding structure of singly vs doubly linked lists
• Strengthening fundamentals through revision

💡 Key Takeaways (DSA)

📌 #206 Reverse Linked List
This problem is all about pointer control:
• keep track of previous, current, next
• reverse links step by step without losing references
• clean logic matters more than complexity here

📌 Linked List & Doubly Linked List Basics
• Singly LL → one-directional traversal
• Doubly LL → extra back pointer for flexibility
• operations like insertion, deletion, traversal depend heavily on pointer accuracy
Key insight: Linked Lists are simple in theory, but easy to mess up if pointer handling isn’t precise.

🚀 Revision
Continued revising earlier topics to strengthen retention.

💡 Key Takeaways
• Concepts feel more stable with repetition
• Better clarity in choosing approaches
• Still improving speed and confidence

⚡ Honest Reflection
This was a foundational day. Not flashy, but important. Pointer-based problems require precision, and I’m still building that muscle. Mistakes are happening, which means there’s room to improve. Revision + fundamentals together is a good move right now.

Consistency is intact. Base is getting stronger.

Patterns recognized: Linked List | Doubly Linked List | Pointer Manipulation | Reversal | Traversal | Fundamentals Reinforcement

#100DaysOfCode #DSA #Python #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
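A minimal Python sketch of that prev / current / next pointer dance (the node class and function name follow the usual LeetCode-style singly linked list and are illustrative, not anyone's exact submission):

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    prev = None
    curr = head
    while curr:
        nxt = curr.next       # save the next node before breaking the link
        curr.next = prev      # reverse the pointer
        prev = curr           # step prev forward
        curr = nxt            # step curr forward
    return prev               # prev ends up as the new head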
Linked List Fundamentals and Pointer Manipulation
More Relevant Posts
This week I stopped just solving problems and started actually understanding my tools.

the thing nobody tells you early on: you can know the logic perfectly and still write terrible code because you're reinventing what already exists. that was me. so this week was all about the STL (C++ Standard Template Library).

what is STL and why does it matter?
STL is a collection of ready-made data structures and algorithms built into C++. instead of manually building a hashmap or a dynamic array from scratch, you use what's already optimized and battle-tested. map, unordered_map, vector, stack, queue, set: these aren't just containers. knowing which one to use and when is the difference between a clean O(n) solution and a messy O(n²) one. in a real interview, you don't have time to build from scratch. you need to know your tools.

what I actually worked on this week:
→ map vs unordered_map: ordered vs O(1) lookup tradeoffs
→ adjacency list using map<int, vector<int>>
→ prefix sum pattern
→ combining hashmap + modular arithmetic

problems solved:
→ LRU Cache (medium) - finally understood how to combine hashmap + doubly linked list
→ Sum of Distances (medium)
→ Make Sum Divisible by P (medium)
→ Minimum Operations to Make Array Sum Divisible by K

stuck on: LFU Cache. LRU felt hard until it clicked. LFU is a whole different beast. still working on it.

the honest part: "Make Sum Divisible by P" took me 2.5+ hours. I got TLE, then WA, fixed both, and finally understood why the solution works. slow? yes. but I didn't copy a solution - I earned it.

my LeetCode if you want to see the journey: https://lnkd.in/ghKx4CgM

now a genuine question for the experienced folks here: when you were learning DSA, how did you balance depth vs speed? spending 2-3 hours on one problem to fully understand it, or timeboxing it and moving on? would love brutally honest takes. drop it in the comments 👇

#LeetCode #DSA #CPP #STL #LearningInPublic #BackendDevelopment #SoftwareEngineering #100DaysOfCode
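The LRU Cache combination mentioned above (a hashmap for O(1) lookup plus a doubly linked list for recency order) is language-agnostic. A rough Python sketch of the same idea using OrderedDict, which bundles both structures (in C++ the usual pairing is unordered_map + std::list); this is an illustration, not the author's actual solution:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()      # keys kept in recency order

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)     # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used key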
Day 22 of 100 Completed

Today continued with linked list fundamentals and took the first step into actual data analysis.

• #876 - Middle of the Linked List (Easy) - solved
• Started basics of EDA (Exploratory Data Analysis)

🔎 Focus Areas
• Fast and slow pointer technique
• Efficient traversal without extra space
• Understanding the purpose of EDA in data workflows

💡 Key Takeaways (DSA)

📌 #876 Middle of the Linked List
This is a classic pattern:
• use two pointers (slow and fast)
• slow moves 1 step, fast moves 2 steps
• when fast reaches the end, slow is at the middle
Clean, efficient, and shows how smart traversal beats brute force.

🚀 Python + EDA
Started basic Exploratory Data Analysis. This is where all the libraries finally start connecting.

💡 Key Takeaways (Python)
• EDA is about understanding data before doing anything with it
• Looking at distributions, missing values, and patterns
• Visualization tools now actually have a purpose, not just syntax practice

⚡ Honest Reflection
This was a meaningful shift. DSA is continuing steadily, but starting EDA makes things feel more real-world. Still early in EDA, so understanding is basic. Need to go deeper and work with actual datasets. Linked list patterns are becoming more intuitive now, which is a good sign.

Consistency is strong. Direction is getting clearer.

Patterns recognized: Fast-Slow Pointers | Linked List Traversal | Space Optimization | Data Understanding | EDA Basics

#100DaysOfCode #DSA #Python #EDA #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
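A minimal Python sketch of the slow/fast pointer idea (assuming nodes with a .next field, as in the usual LeetCode list definition; the function name is illustrative):

def middle_node(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # moves 1 step
        fast = fast.next.next     # moves 2 steps
    return slow                   # slow sits at the middle (the second middle for even length)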
Day 9 of 100 Completed

Today was about reinforcing data skills while continuing to sharpen interval-based problem solving.

• #1094 - Car Pooling (Medium) - solved
• Continued Pandas fundamentals

🔎 Focus Areas
• Applying prefix sum / difference array concepts on intervals
• Understanding capacity constraints over a timeline
• Going deeper into data manipulation with Pandas

💡 Key Takeaways (DSA)

📌 #1094 Car Pooling
This problem reinforced how powerful range updates can be when handled correctly. Instead of checking every trip naively, the smarter approach:
• add passengers at pickup
• remove passengers at drop-off
• track running capacity over time
The idea is simple, but the impact is huge in terms of efficiency. Starting to see patterns repeat across interval problems, which is a good sign.

🚀 Python + Pandas
Continued working with DataFrames and basic operations. Getting more comfortable with how data is stored and manipulated.

💡 Key Takeaways (Python)
• Operations on columns are becoming more intuitive
• Less reliance on loops, more on built-in functions
• Still building speed, but understanding is improving steadily

⚡ Honest Reflection
This was a steady day. Not flashy, but important. These are the days where foundations actually get built. I’m starting to recognize patterns faster, especially in interval-based questions. That reduces hesitation and improves confidence. Pandas still needs more practice, but the learning curve feels manageable now.

Consistency maintained. Momentum continues.

Patterns recognized: Difference Array | Prefix Sum | Interval Scheduling | Capacity Tracking | DataFrames | Column Operations

#100DaysOfCode #DSA #Python #Pandas #LeetCode #BuildInPublic #CodingJourney #Consistency
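A short Python sketch of that difference-array idea (assuming the LeetCode #1094 constraint that stop locations stay within 0..1000; names are illustrative):

def car_pooling(trips, capacity):
    diff = [0] * 1001                   # difference array over the timeline
    for passengers, start, end in trips:
        diff[start] += passengers       # pick up
        diff[end] -= passengers         # drop off
    running = 0
    for change in diff:
        running += change               # prefix sum = passengers currently on board
        if running > capacity:
            return False
    return True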
𝗗𝗮𝘆 𝟲𝟲/𝟳𝟱 | 𝗟𝗲𝗲𝘁𝗖𝗼𝗱𝗲 𝟳𝟱

𝗣𝗿𝗼𝗯𝗹𝗲𝗺: 714. Best Time to Buy and Sell Stock with Transaction Fee
𝗗𝗶𝗳𝗳𝗶𝗰𝘂𝗹𝘁𝘆: Medium

𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗦𝘂𝗺𝗺𝗮𝗿𝘆:
Given an array prices where prices[i] represents the stock price on day i, and a transaction fee, find the maximum profit you can achieve.
Constraints:
• You can make multiple transactions
• You must sell before buying again
• Each transaction incurs a fixed fee

𝗠𝘆 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵:
This problem is solved using Dynamic Programming with state optimization. Instead of maintaining a full DP table, we track two states:
• buy → Maximum profit when holding a stock
• sell → Maximum profit when not holding a stock
• Initialization:
– buy = -∞ (we haven’t bought yet)
– sell = 0
• Transition for each price:
– buy = max(buy, sell - price) (Either keep holding or buy today)
– sell = max(sell, buy + price - fee) (Either keep not holding or sell today after paying fee)
• Final answer: sell
This works because at every step, we decide whether to take an action (buy/sell) or skip, while always keeping track of the best possible profit.

𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀:
• Time Complexity: O(n)
• Space Complexity: O(1)

𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆:
Stock problems often reduce to state machines. Tracking “holding” vs “not holding” states and optimizing transitions can simplify even complex trading constraints like transaction fees.

𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗟𝗶𝗻𝗸: https://lnkd.in/gz6hgkXw

#Day66of75 #LeetCode75 #DSA #Java #Python #DynamicProgramming #Greedy #MachineLearning #DataScience #ML #DataAnalyst #LearningInPublic #TechJourney #LeetCode
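The two-state transition above translates almost line for line into code; a minimal Python sketch (the post's own solution may be in Java or C++):

def max_profit(prices, fee):
    buy = float('-inf')    # best profit while holding a stock
    sell = 0               # best profit while not holding
    for price in prices:
        buy = max(buy, sell - price)           # keep holding, or buy today
        sell = max(sell, buy + price - fee)    # keep waiting, or sell today and pay the fee
    return sell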
I built an MCP server that roasts your pull requests

You know that PR you shipped on Friday at 5pm with the description "misc fixes"? Yeah, this tool has opinions about that.

pr-roast-mcp is an MCP server that reads any GitHub PR - the diff, the stats, the description (or lack thereof) - and delivers a brutally honest code review. With a severity rating from 🔥 to 🔥🔥🔥🔥🔥.

▎ "Your tests are thorough. Like, suspiciously thorough. 156 lines for a POST endpoint?
▎ You're basically writing a dissertation on HTTP status codes."
▎
▎ "849 lines added, 7 removed. That's 121:1 ratio. For a 'bonus feature,' this
▎ sprawls."

It's always technically accurate, though. Every roast points at real issues - naming, complexity, missing edge cases, over-engineering. It just delivers the feedback the way your most senior engineer would after their third coffee.

It always ends with one genuine compliment. Mine was about rounding edge cases in bonus calculations. Small wins.

Two tools, ~150 lines of Python:
- roast_pr - point it at any PR number or URL
- roast_my_prs - lists your PRs so you can pick a victim

Uses the gh CLI to fetch the diff, Claude Haiku for the roast. Setup is one line.

We've been using it in our team Slack before merges. Morale has either improved or collapsed, depending on who you ask.

Code: https://lnkd.in/gHcZFTqB

#buildInPublic #AI #claude #haiku #MCP #Python #DevTools #CodeReview #OpenSource
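For anyone curious what such a tool roughly looks like, here is a stripped-down sketch of the idea - not the actual pr-roast-mcp code. It assumes the official MCP Python SDK (FastMCP), the anthropic SDK, and the gh CLI on PATH; the model name and prompt wording are placeholders.

import subprocess
import anthropic
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pr-roast")
client = anthropic.Anthropic()

@mcp.tool()
def roast_pr(pr: str) -> str:
    """Fetch a PR's diff with the gh CLI and return a brutally honest review."""
    diff = subprocess.run(["gh", "pr", "diff", pr],
                          capture_output=True, text=True).stdout
    reply = client.messages.create(
        model="claude-3-5-haiku-latest",    # assumed Haiku model alias
        max_tokens=800,
        messages=[{"role": "user",
                   "content": "Roast this pull request diff, rate severity 1-5 flames, "
                              "and end with one genuine compliment:\n\n" + diff}],
    )
    return reply.content[0].text

if __name__ == "__main__":
    mcp.run()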
🚀 Day 344 of solving 365 medium questions on LeetCode! 🔥

Today’s challenge: “89. Gray Code”

✅ Problem:
You are given an integer n. Your goal is to generate an n-bit Gray code sequence, which is an array of 2^n integers where every adjacent pair of numbers (including the first and last numbers) differs by exactly one single bit in their binary representation.

✅ Approach (Bit Manipulation / The Formula)
You could solve this using backtracking or mirroring, but there is a mathematical cheat code that solves it instantly!
Find the Size: First, we need to know exactly how many numbers to generate. For an n-bit sequence, there are exactly 2^n numbers. I used a bitwise left shift (1 << n) to calculate this size instantly.
The Magic Formula: The i-th number in a standard Gray code sequence can always be found using the exact formula: i ^ (i >> 1). This takes the number, shifts its bits to the right by one, and applies a bitwise XOR against the original number.
List Comprehension: I packed this entire logic into a single Python list comprehension that loops from 0 up to our calculated size. It applies the magic formula to every index i, generating the perfect sequence in one go!

✅ Key Insight
Bitwise operations are essentially black magic when you know the right formulas. Recognizing that Gray code has a direct integer-to-sequence mapping completely eliminates the need for messy recursive state-tracking. What looks like a complex combinatorial sequence problem is actually just a one-line math trick!

✅ Complexity
Time: O(2^n) — We must iterate to generate exactly 2^n elements for the sequence.
Space: O(1) — Excluding the space required for the output array, the mathematical generation uses strictly constant auxiliary memory.

🔍 Python solution attached!
🔥 Flexing my coding skills until recruiters notice!

#LeetCode365 #BitManipulation #Math #Python #ProblemSolving #DSA #Coding #SoftwareEngineering
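The whole approach described above really does fit in one line; a minimal Python sketch (function name is illustrative):

def gray_code(n):
    size = 1 << n                                  # 2^n numbers in the sequence
    return [i ^ (i >> 1) for i in range(size)]     # the i-th Gray code via the "magic formula"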
LeetCode Daily | Day 78 🔥

LeetCode POTD – 3488. Closest Equal Element Queries (Medium) ✨

📌 Problem Insight
Given a circular array:
✔ For each query index, find the nearest index with the same value
✔ Distance is circular
✔ Return the minimum distance
✔ If no same value exists → return -1

🔍 Initial Thinking – Brute Force ⚙️
💡 Idea:
✔ For each query, scan the entire array
✔ Check all indices with the same value
⚠️ Problem:
✔ O(n) per query → too slow
✔ Total becomes O(n²) in the worst case

💡 Key Observation 🔥
✔ Same values repeat → group their indices
✔ The nearest answer lies among adjacent indices in that group
✔ Circular distance: min(|i - j|, n - |i - j|)

🚀 Optimized Approach
✔ Store indices for each value (hash map)
✔ For each query:
→ Use binary search to find the position
→ Check the nearest left & right indices
✔ Handle circular wrap using modulo

🔧 Core Idea
✔ Reduce the search space using grouping
✔ Use binary search for nearest neighbors
✔ Apply the circular distance formula

⏱ Complexity
✔ Time: O(n + q log n)
✔ Space: O(n)

🧠 Key Learning
✔ Nearest element problems → check neighbors, not all elements
✔ Preprocessing (grouping) can drastically optimize queries
✔ Circular arrays often need wrap-around handling

🚀 Takeaway
A great mix of hashing + binary search + circular logic — a classic interview pattern for reducing brute force into efficient queries ⚡

#LeetCode #DSA #Algorithms #CPlusPlus #ProblemSolving #CodingJourney
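A Python sketch of the grouping + binary search approach described above (the post's solution is in C++, and the real problem's exact signature may differ; this just illustrates the pattern):

from bisect import bisect_left
from collections import defaultdict

def closest_equal_queries(nums, queries):
    n = len(nums)
    positions = defaultdict(list)
    for i, v in enumerate(nums):
        positions[v].append(i)              # index lists stay sorted by construction

    answers = []
    for q in queries:
        idx_list = positions[nums[q]]
        if len(idx_list) == 1:
            answers.append(-1)              # no other occurrence of this value
            continue
        pos = bisect_left(idx_list, q)      # position of q inside its own group
        best = n
        for j in (pos - 1, pos + 1):        # nearest occurrences on either side
            other = idx_list[j % len(idx_list)]   # modulo handles the circular wrap
            d = abs(q - other)
            best = min(best, d, n - d)      # circular distance
        answers.append(best)
    return answers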
Shallow Copy vs Deep Copy — The 2 AM Bug Trap 🛑

Most developers think they understand copying objects, until their original data mysteriously changes. That’s not a bug, that’s memory behavior biting you.

→ Shallow Copy
Creates a new container, but nested objects are still shared (by reference)
👉 Change nested data → both copies change.
Best for: Flat, simple data.

→ Deep Copy
Creates a completely independent clone, everything is copied recursively.
👉 Change anything → original stays untouched
Best for: Complex, nested structures.

💡 Rule of Thumb
Shallow → when you only need a surface-level copy
Deep → when you need true isolation

⚠️ The real trap:
Most bugs aren’t syntax errors. They come from not understanding how data behaves in memory.

If you’ve ever spent hours debugging only to realize it was a shallow copy issue, welcome to the club 😄

#Python #Python3 #Programming #SoftwareEngineering #CleanCode #Debugging #TechTips #PythonDeveloper #BackendDevelopment
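A minimal Python example of the trap, using the standard copy module (the sample data is made up for illustration):

import copy

original = {"name": "report", "tags": ["draft", "internal"]}

shallow = copy.copy(original)        # new dict, but the inner list is shared
deep = copy.deepcopy(original)       # fully independent clone

original["tags"].append("urgent")

print(shallow["tags"])   # ['draft', 'internal', 'urgent']  <- changed too
print(deep["tags"])      # ['draft', 'internal']            <- untouched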
LeetCode Daily | Day 81 🔥

LeetCode POTD – 2452. Words Within Two Edits of Dictionary (Medium) ✨

📌 Problem Insight
Given two arrays of same-length words:
✔ queries and dictionary
✔ You can change characters (edits) in a query
✔ Match if ≤ 2 edits needed
✔ Return all such queries

🔍 Initial Thinking – Brute Force ⚙️
💡 Idea:
✔ Compare every query with every dictionary word
✔ Count character differences
⚠️ Concern:
✔ Seems heavy → O(Q × D × n)
✔ But constraints are small → acceptable

💡 Key Observation 🔥
✔ This is just Hamming Distance ≤ 2
✔ Early stopping helps → stop once diff > 2
✔ No need for complex data structures

🚀 Optimized Approach
✔ For each query:
→ Compare with dictionary words
→ Count mismatches
→ If mismatches ≤ 2 → valid

🔧 Core Idea
✔ Character-by-character comparison
✔ Break early if differences exceed 2
✔ Add query once a match is found

⏱ Complexity
✔ Time: O(Q × D × n)
✔ Space: O(1)

🧠 Key Learning
✔ Not every problem needs optimization tricks
✔ Constraints guide the approach
✔ Early breaking can significantly reduce runtime

🚀 Takeaway
A clean implementation problem where recognizing Hamming distance makes everything straightforward ⚡

#LeetCode #DSA #Algorithms #CPlusPlus #ProblemSolving #CodingJourney
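A Python sketch of the early-stopping Hamming comparison (the post's own solution is in C++; names here are illustrative):

def two_edit_words(queries, dictionary):
    result = []
    for q in queries:
        for w in dictionary:
            diff = 0
            for a, b in zip(q, w):        # words have equal length
                if a != b:
                    diff += 1
                    if diff > 2:          # stop early - this word can't match
                        break
            if diff <= 2:
                result.append(q)
                break                     # one matching dictionary word is enough
    return result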
🐍 Day 11 of 30 — My Monday morning report used to take 2 hours. Python now runs it in 4 seconds.

The honest story — bugs included.

I'd been rebuilding the same pivot table every single Monday for months. I got tired of it. So I decided to automate it with Python. Zero experience. Just YouTube, documentation, and stubbornness.

Here's what the script does:
1. Reads the weekly claims CSV file automatically
2. Filters rows where status = "denied"
3. Groups by denial_code + payer_name
4. Calculates total count and revenue at risk
5. Sorts by revenue descending — highest risk first
6. Outputs a formatted Excel report
7. Emails it to my manager automatically

Here's the honest version history:
Version 1: 3 bugs. Ran nothing correctly. Spent a full Saturday debugging.
Version 2: Worked — but the output was completely unformatted.
Final version: Runs every Monday at 7am. 4 seconds. Professional output. Zero effort.

The weekend I spent building it? Has saved me 8+ hours every single month ever since.

The best investment of time is always the thing that eliminates itself. Build it once. Let it run forever.

That's automation. In a billing office. On real data.

Tomorrow: That same script found a billing trend my team had missed for 4 straight months. 👇

#Python #Automation #HealthcareData #LearningInPublic #DataAnalysis #Day11of30
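A simplified pandas sketch of steps 1-6 above (the file name and the column names status, denial_code, payer_name, and amount are illustrative guesses, not the real schema, and the emailing step is left out):

import pandas as pd

claims = pd.read_csv("weekly_claims.csv")

denied = claims[claims["status"] == "denied"]

report = (denied.groupby(["denial_code", "payer_name"])
                .agg(claim_count=("status", "size"),
                     revenue_at_risk=("amount", "sum"))
                .sort_values("revenue_at_risk", ascending=False)
                .reset_index())

report.to_excel("monday_denials_report.xlsx", index=False)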