100 Days of Code: Day 9 - Data Skills and Interval Problem Solving

Day 9 of 100 Completed

Today was about reinforcing data skills while continuing to sharpen interval-based problem solving.

• #1094 - Car Pooling (Medium) - solved
• Continued Pandas fundamentals

🔎 Focus Areas
• Applying prefix sum / difference array concepts on intervals
• Understanding capacity constraints over a timeline
• Going deeper into data manipulation with Pandas

💡 Key Takeaways (DSA)

📌 #1094 Car Pooling
This problem reinforced how powerful range updates can be when handled correctly. Instead of checking every trip naively, the smarter approach:
• add passengers at pickup
• remove passengers at drop-off
• track running capacity over time
The idea is simple, but the impact is huge in terms of efficiency. Starting to see patterns repeat across interval problems, which is a good sign.

🚀 Python + Pandas
Continued working with DataFrames and basic operations. Getting more comfortable with how data is stored and manipulated.

💡 Key Takeaways (Python)
• Operations on columns are becoming more intuitive
• Less reliance on loops, more on built-in functions
• Still building speed, but understanding is improving steadily

⚡ Honest Reflection
This was a steady day. Not flashy, but important. These are the days where foundations actually get built. I’m starting to recognize patterns faster, especially in interval-based questions. That reduces hesitation and improves confidence. Pandas still needs more practice, but the learning curve feels manageable now.

Consistency maintained. Momentum continues.

Patterns recognized: Difference Array | Prefix Sum | Interval Scheduling | Capacity Tracking | DataFrames | Column Operations

#100DaysOfCode #DSA #Python #Pandas #LeetCode #BuildInPublic #CodingJourney #Consistency
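The pickup/drop-off bookkeeping described in the post can be sketched as a difference array plus a prefix-sum sweep. This is a minimal sketch of the technique, not the author's actual submission; the 0..1000 stop range comes from the LeetCode #1094 constraints:

```python
def car_pooling(trips, capacity):
    """Difference-array check: add passengers at pickup, remove at drop-off."""
    delta = [0] * 1001  # LeetCode #1094 bounds stop locations to 0..1000
    for passengers, start, end in trips:
        delta[start] += passengers  # board at pickup
        delta[end] -= passengers    # leave at drop-off
    running = 0
    for change in delta:            # prefix sum = passengers on board over time
        running += change
        if running > capacity:
            return False
    return True
```

Each trip costs O(1) to record, and one linear sweep recovers the running occupancy, which is exactly the efficiency win the post describes.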
More Relevant Posts
Built a Credit Risk Scoring Model using Logistic Regression and the German Credit Dataset. The model predicts whether a customer is a good or bad credit risk based on financial and demographic factors.

Skills used: Python, Pandas, Scikit-learn, Data Preprocessing, Logistic Regression, Feature Importance

GitHub: https://lnkd.in/guuPQ9_H
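As a rough illustration of the idea only (the repo itself uses Scikit-learn on the real German Credit data), here is a tiny from-scratch logistic regression trained by gradient descent on made-up, pre-scaled features; the learned weight magnitudes give a crude feature-importance signal:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain SGD logistic regression; weights double as a rough importance signal."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability of "bad risk"
            err = p - yi                     # gradient of log-loss w.r.t. z
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical stand-ins for scaled features, e.g. [credit_amount, duration]
X = [[0.2, 0.1], [0.4, 0.3], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]  # 1 = bad credit risk (made-up labels for illustration)
w, b = train_logistic(X, y)
```

On real data you would use `sklearn.linear_model.LogisticRegression` instead; the hand-rolled loop is just to make the mechanics visible.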
Day 21 of 100 Completed

Today shifted focus toward core data structures while continuing revision - building stronger fundamentals in linked lists.

• #206 - Reverse Linked List - solved
• Studied Linked List & Doubly Linked List basic operations
• Continued revision of previous topics

🔎 Focus Areas
• Pointer manipulation and traversal logic
• Understanding structure of singly vs doubly linked lists
• Strengthening fundamentals through revision

💡 Key Takeaways (DSA)

📌 #206 Reverse Linked List
This problem is all about pointer control:
• keep track of previous, current, next
• reverse links step by step without losing references
• clean logic matters more than complexity here

📌 Linked List & Doubly Linked List Basics
• Singly LL → one-directional traversal
• Doubly LL → extra back pointer for flexibility
• operations like insertion, deletion, traversal depend heavily on pointer accuracy
Key insight: Linked Lists are simple in theory, but easy to mess up if pointer handling isn’t precise.

🚀 Revision
Continued revising earlier topics to strengthen retention.

💡 Key Takeaways
• Concepts feel more stable with repetition
• Better clarity in choosing approaches
• Still improving speed and confidence

⚡ Honest Reflection
This was a foundational day. Not flashy, but important. Pointer-based problems require precision, and I’m still building that muscle. Mistakes are happening, which means there’s room to improve. Revision + fundamentals together is a good move right now.

Consistency is intact. Base is getting stronger.

Patterns recognized: Linked List | Doubly Linked List | Pointer Manipulation | Reversal | Traversal | Fundamentals Reinforcement

#100DaysOfCode #DSA #Python #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
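The previous/current/next pointer dance for #206 can be sketched like this (a generic iterative solution to the problem, not necessarily the author's code):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    """Iteratively reverse a singly linked list: re-point one link per step."""
    prev = None
    curr = head
    while curr:
        nxt = curr.next   # save the rest of the list before breaking the link
        curr.next = prev  # reverse this node's pointer
        prev = curr       # advance prev
        curr = nxt        # advance curr
    return prev           # prev ends up at the old tail, the new head

# hypothetical list: 1 -> 2 -> 3
new_head = reverse_list(ListNode(1, ListNode(2, ListNode(3))))
```

Saving `curr.next` before overwriting it is the "without losing references" point from the post: skip that line and the rest of the list is unreachable.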
Stock analysis shouldn’t require five different tools.

You can now use Quadratic to ask for real market data in plain language, generate Python automatically, and turn prices, fundamentals, financial statements, and technical indicators into charts and analysis inside one spreadsheet.

No API keys. No plugins. No external setup. Just open Quadratic and start analyzing.

See how it works: https://lnkd.in/eq2hMUFm
A small constraint can completely change the algorithm you need.

Spent last weekend revisiting an algorithm I find genuinely beautiful: feasibility circulation on flow networks with lower bounds.

Take vehicle assignment: If every vehicle only has a maximum capacity, standard max-flow works nicely. But if every vehicle also has a minimum capacity, the problem becomes more interesting. Now the question shifts from simple capacity to overall feasibility. Each assignment must not only fit within limits, but also justify using the vehicle at all. That is where feasibility circulation with lower bounds comes in.

The problem setup: Assign exactly T people to a set of vehicles where:
• each vehicle has a minimum and maximum capacity
• each person has a maximum walking distance to a pickup point
• the final assignment must satisfy all constraints

The minimum capacity constraint is what makes the problem interesting. Standard max-flow handles upper bounds naturally, but lower bounds need a different modelling approach.

The trick is a reduction: Convert each lower bound into a node demand, add a super-source and super-sink, then check whether max-flow can satisfy all demands.
- If it can, the original assignment is feasible.
- If it cannot, no valid assignment exists.

I implemented the solution in Python with test cases and an independent verifier to check correctness.

Time complexity: O(S·T + L + R log L)

Repo and README walking through the algorithm: https://lnkd.in/gmQXYkbB

#algorithms #python #operationsresearch #optimization
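The reduction described above (lower bounds become node demands, checked via a super-source and super-sink) can be sketched roughly as follows. This is not the linked repo's code: the function names are illustrative, the max-flow helper is a plain Edmonds-Karp rather than whatever the repo uses, and it handles a pure circulation (an s-t flow with lower bounds would additionally need a t→s edge of unbounded capacity):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp BFS max-flow on a dict-of-dict capacity map (mutated in place)."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # reconstruct the augmenting path, push its bottleneck
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push  # residual edge
        flow += push

def feasible_circulation(n, edges):
    """edges: (u, v, lower, upper). Feasible iff every super-source edge saturates."""
    cap = defaultdict(lambda: defaultdict(int))
    demand = [0] * n
    for u, v, lo, hi in edges:
        cap[u][v] += hi - lo  # only the free capacity above the lower bound
        demand[u] -= lo       # u is forced to send at least lo
        demand[v] += lo       # v is forced to receive at least lo
    S, T = n, n + 1           # super-source, super-sink
    need = 0
    for node, d in enumerate(demand):
        if d > 0:
            cap[S][node] += d
            need += d
        elif d < 0:
            cap[node][T] += -d
    return max_flow(cap, S, T) == need
```

For example, two nodes with edge 0→1 forced to carry at least 2 units but a return edge 1→0 capped at 1 cannot conserve flow, and the check reports infeasibility.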
🚀 Day 74 of #100DaysOfCode
🧩 LeetCode 220 – Contains Duplicate III (Hard)

Today’s problem was a solid mix of logic + optimization. Not brute-force friendly at all — you have to think smart.

🔍 Problem Statement:
Given an array "nums" and two integers "indexDiff" and "valueDiff", check if there exist two indices "i" and "j" such that:
✔️ "i ≠ j"
✔️ "|i - j| ≤ indexDiff"
✔️ "|nums[i] - nums[j]| ≤ valueDiff"

💡 Approach Used (Bucket + Sliding Window):
Instead of comparing every pair (which would be too slow), I used:
👉 Bucketization Technique
👉 Sliding Window Constraint

Each number is placed into a bucket of size "valueDiff + 1".
- Same bucket ⇒ valid pair
- Neighbor buckets ⇒ check manually
- Maintain only last "indexDiff" elements

⚡ Why this works: It reduces time complexity from O(n²) → O(n)

📊 My Performance:
⏱️ Runtime: 139 ms
💾 Memory: 37.38 MB

🔥 Key Learning: Efficient problems are less about coding and more about choosing the right data structure.

#Day74 #LeetCode #100DaysOfCode #DSA #CodingJourney #Python #ProblemSolving #Consistency
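The bucket + sliding-window idea can be sketched like this (a generic implementation of the stated technique, not the author's 139 ms submission):

```python
def contains_nearby_almost_duplicate(nums, index_diff, value_diff):
    """Bucketize values into width (value_diff + 1); keep only the last
    index_diff elements alive so the |i - j| <= index_diff constraint holds."""
    if index_diff <= 0 or value_diff < 0:
        return False
    width = value_diff + 1
    buckets = {}  # bucket id -> the one value from the window in that bucket
    for i, x in enumerate(nums):
        b = x // width  # floor division keeps negatives in the right bucket
        if b in buckets:
            return True  # same bucket => difference is at most value_diff
        if b - 1 in buckets and x - buckets[b - 1] <= value_diff:
            return True  # left neighbor bucket: check manually
        if b + 1 in buckets and buckets[b + 1] - x <= value_diff:
            return True  # right neighbor bucket: check manually
        buckets[b] = x
        if i >= index_diff:  # evict the element sliding out of the window
            del buckets[nums[i - index_diff] // width]
    return False
```

Each element is inserted and evicted at most once and each step does O(1) dictionary work, which is where the O(n²) → O(n) claim comes from.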
Day 22 of 100 Completed

Today continued with linked list fundamentals and took the first step into actual data analysis.

• #876 - Middle of the Linked List (Easy) - solved
• Started basics of EDA (Exploratory Data Analysis)

🔎 Focus Areas
• Fast and slow pointer technique
• Efficient traversal without extra space
• Understanding the purpose of EDA in data workflows

💡 Key Takeaways (DSA)

📌 #876 Middle of the Linked List
This is a classic pattern:
• use two pointers (slow and fast)
• slow moves 1 step, fast moves 2 steps
• when fast reaches the end, slow is at the middle
Clean, efficient, and shows how smart traversal beats brute force.

🚀 Python + EDA
Started basic Exploratory Data Analysis. This is where all the libraries finally start connecting.

💡 Key Takeaways (Python)
• EDA is about understanding data before doing anything with it
• Looking at distributions, missing values, and patterns
• Visualization tools now actually have a purpose, not just syntax practice

⚡ Honest Reflection
This was a meaningful shift. DSA is continuing steadily, but starting EDA makes things feel more real-world. Still early in EDA, so understanding is basic. Need to go deeper and work with actual datasets. Linked list patterns are becoming more intuitive now, which is a good sign.

Consistency is strong. Direction is getting clearer.

Patterns recognized: Fast-Slow Pointers | Linked List Traversal | Space Optimization | Data Understanding | EDA Basics

#100DaysOfCode #DSA #Python #EDA #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
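The fast/slow pointer pattern for #876 fits in a few lines (a generic solution sketch, not necessarily the author's code):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def middle_node(head):
    """Fast moves two steps for every one of slow's; when fast runs off the
    end, slow is at the middle (the second middle for even-length lists)."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow

# hypothetical list: 1 -> 2 -> 3 -> 4 -> 5
mid = middle_node(ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5))))))
```

One pass, O(1) extra space: no counting pass and no array copy, which is the "smart traversal beats brute force" point.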
7 days. 7 free notebooks. Here’s the week 3 finale.

p < 0.05 doesn’t mean what you think. This notebook teaches stats that actually matter.

What it covers:
→ Null vs alternative hypothesis — framed as business questions
→ T-tests — comparing means between two groups
→ Chi-square tests — comparing proportions and categories
→ P-values — what they actually mean (and what they don't)
→ Effect size — statistical significance != practical significance
→ A/B test design — sample size, power analysis, duration
→ Common pitfalls: peeking, multiple comparisons, Simpson's paradox

Every concept is a business scenario with runnable code. Not formulas on a whiteboard. Decisions backed by data.

Free: https://lnkd.in/gGnED-7n

That's 7 free notebooks this week:
1. Web Scraping with BeautifulSoup
2. Classification: Logistic Regression, Trees & KNN
3. API Masterclass with Authentication
4. Market Basket Analysis
5. Voice of Customer & Text Mining
6. Plotly Interactive Visualization
7. Hypothesis Testing & A/B Tests

All free. All runnable. All on topfolio.in. I have 1,098 notebooks on the platform. Follow me for more drops next week. What topic should I share next?

#Statistics #HypothesisTesting #ABTesting #Python #DataScience #DataAnalyst #ProductAnalytics #FreeResources
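None of those notebooks are reproduced here, but the operational meaning of a p-value ("how often would chance alone look at least this extreme?") can be demonstrated in a few lines with a permutation test, a standard simulation-based alternative to the t-test. This is an illustrative sketch, not from the linked material:

```python
import random

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on a difference of means: the p-value is
    literally the fraction of label-shuffles whose mean difference is at
    least as extreme as the one we observed."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)                      # break any real group effect
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(pa) / len(pa) - sum(pb) / len(pb))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# clearly separated groups -> tiny p-value; identical groups -> p = 1.0
p_separated = permutation_test([10, 11, 12, 13, 14], [20, 21, 22, 23, 24])
```

Note that a small p-value says the difference is unlikely under random labeling, not that the effect is large; that distinction is the effect-size point in the list above.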
I’ve just released a small desktop tool I built for tracking time against fixed project allocations: Tariff Clock.

It’s designed for situations where time is effectively “prepaid” or capped — consulting, research support, internal service work — and you want a simple, auditable way to track usage.

Key ideas:
- Each project has a fixed time “tariff”
- Start/stop behaves like a chess clock (only one running at a time)
- Live countdown of remaining time
- Automatic session logging + manual adjustments (with reasons)
- Full CSV audit trail per project

It’s intentionally lightweight, local-first, and transparent. Cross-platform.

https://lnkd.in/encZ45wW

#ProductivityTools #TimeTracking #Consulting #ResearchSupport #DataManagement #OpenSource #Python #MacOS #IndieDev #Workflow
🚀 LeetCode — 207. Course Schedule
Solved | Medium | Graph | Cycle Detection (DFS)

🔗 Solution Link: https://lnkd.in/gNerrUfM

At first, this doesn’t look like a graph problem. But once you model prerequisites as edges, it becomes a directed graph: b → a (to take a, you must complete b)

💡 Core Idea
The question reduces to: Can we complete all courses? → Equivalent to: Does the graph contain a cycle? If there’s a cycle → impossible to finish all courses.

🧠 Approach (DFS + State Tracking)
Initially, I tried applying undirected graph cycle logic — but that doesn’t work here. In directed graphs, we need an extra state:
0 → not visited
1 → visited
2 → in recursion stack (instack)

While doing DFS:
- If we visit a node already in instack, we found a cycle
- If visited but not in stack → safe
- After exploring, remove it from stack (backtrack)

This “instack” idea is the key difference from undirected graphs.

📈 Complexity
Time: O(V + E)
Space: O(V)

A classic problem that teaches the subtle difference between undirected vs directed cycle detection.

#LeetCode #Graph #DFS #CycleDetection #TopologicalSort #DSA #ProblemSolving #CodingJourney
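The three-state DFS described above might look like this (a sketch of the technique, not the linked solution):

```python
def can_finish(num_courses, prerequisites):
    """DFS with three states: 0 = not visited, 1 = visited (safe),
    2 = in the current recursion stack. Reaching a state-2 node means
    the directed prerequisite graph has a cycle."""
    graph = [[] for _ in range(num_courses)]
    for course, prereq in prerequisites:
        graph[prereq].append(course)  # edge prereq -> course

    state = [0] * num_courses

    def dfs(node):
        if state[node] == 2:
            return False  # back edge into the recursion stack: cycle
        if state[node] == 1:
            return True   # fully explored earlier, known safe
        state[node] = 2   # push onto the recursion stack
        for nxt in graph[node]:
            if not dfs(nxt):
                return False
        state[node] = 1   # backtrack: done, remove from stack
        return True

    return all(dfs(c) for c in range(num_courses))
```

The downgrade from state 2 back to state 1 on backtrack is the "instack" subtlety: in an undirected graph a plain visited flag suffices, but here a visited-but-not-in-stack node must not be reported as a cycle.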
You know what hits hard? When your dashboard is slow, your code is clunky, and your insights are buried under noise. 😤

I stopped relying on outdated libraries and switched to a pure SQL approach that cut my prep time in half. ⚠️ The mistake? Assuming pre-built tools would do the work for me. The real win? A simple pipeline built with Python + pandas, automating reports and letting LLMs summarize the results faster. 📈

What changed? Clarity in workflow + speed in output — now I can focus on what matters, not what’s complicated.

If you’re chasing more in the data game, this is your nudge. What lesson are you learning from your data stack? 🚀