Day 7/30
🔹 Problem: Split expenses among friends (equal & custom split)
🔹 What I focused on today: Handling multiple scenarios based on user choice
🔹 My Thinking Process:
Take total expense and number of people
Ask the user how they want to split (equal or custom)
If equal → divide total by number of people
If custom → take individual contributions
👉 Same problem, different approaches based on user need
🔹 Inputs I used:
Total expense
Number of people
Choice (equal/custom)
Individual amounts (for custom split)
🔹 Code:
total = float(input("Enter total expense: "))
people = int(input("Enter number of people: "))
choice = input("Enter '1' for equal split or '2' for custom split: ")

# Equal Split
if choice == "1":
    share = total / people
    print("Each person should pay:", share)
# Custom Split
elif choice == "2":
    sum_amount = 0
    for i in range(people):
        amount = float(input("Enter amount paid: "))
        sum_amount = sum_amount + amount
    if sum_amount == total:
        print("Expenses match the total.")
    else:
        print("Amounts do not match total.")
else:
    print("Invalid choice")
🔹 Example:
Total = 1000, People = 4
Equal → Each pays 250.0
🔹 Key Takeaway: Real-world problems often need flexible logic to handle different scenarios, not just one fixed solution
#Day7 #Python #30DaysOfCode #LearningInPublic #DataAnalytics #ProblemSolving
Nithya S’ Post
More Relevant Posts
🐍 Day 11 of 30 — My Monday morning report used to take 2 hours. Python now runs it in 4 seconds. The honest story — bugs included.
I'd been rebuilding the same pivot table every single Monday for months. I got tired of it. So I decided to automate it with Python. Zero experience. Just YouTube, documentation, and stubbornness.
Here's what the script does:
1. Reads the weekly claims CSV file automatically
2. Filters rows where status = "denied"
3. Groups by denial_code + payer_name
4. Calculates total count and revenue at risk
5. Sorts by revenue descending — highest risk first
6. Outputs a formatted Excel report
7. Emails it to my manager automatically
Here's the honest version history:
Version 1: 3 bugs. Ran nothing correctly. Spent a full Saturday debugging.
Version 2: Worked — but the output was completely unformatted.
Final version: Runs every Monday at 7am. 4 seconds. Professional output. Zero effort.
The weekend I spent building it? Has saved me 8+ hours every single month ever since.
The best investment of time is always the thing that eliminates itself. Build it once. Let it run forever.
That's automation. In a billing office. On real data.
Tomorrow: That same script found a billing trend my team had missed for 4 straight months. 👇
#Python #Automation #HealthcareData #LearningInPublic #DataAnalysis #Day11of30
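The core of steps 1 to 5 can be sketched with pandas. This is a hedged reconstruction, not the author's script: the column names (status, denial_code, payer_name, amount), the function name, and the sample data are all assumptions, and the CSV read, Excel write, and email steps are only noted in comments.

```python
import pandas as pd

def denial_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize denied claims: count and revenue at risk per denial_code/payer."""
    denied = df[df["status"] == "denied"]                 # step 2: filter denials
    summary = (denied.groupby(["denial_code", "payer_name"])["amount"]  # step 3
                     .agg(claim_count="count", revenue_at_risk="sum")   # step 4
                     .reset_index()
                     .sort_values("revenue_at_risk", ascending=False))  # step 5
    return summary

# Tiny inline stand-in for the real CSV (hypothetical data):
claims = pd.DataFrame({
    "status": ["denied", "paid", "denied", "denied"],
    "denial_code": ["CO45", "CO45", "CO97", "CO45"],
    "payer_name": ["Acme", "Acme", "Beta", "Acme"],
    "amount": [100.0, 50.0, 80.0, 20.0],
})
report = denial_report(claims)
# The real script would start from pd.read_csv(...) and end with
# report.to_excel(...) plus an email step via smtplib or similar.
```

The grouped, sorted summary is exactly the pivot table the post describes rebuilding by hand.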
Day 11/30
🔹 Problem: Build a calculator using functions
🔹 What I focused on today: Breaking a problem into small reusable functions and controlling flow using a menu
🔹 My Thinking Process:
Create separate functions for each operation (add, subtract, multiply, divide)
Show a menu to the user
Take the user's choice
Call the corresponding function
👉 Functions + menu = clean and organized program
🔹 Inputs I used:
Two numbers
Operation choice
🔹 Code:
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

def divide(a, b):
    if b == 0:
        return "Cannot divide by zero"
    return a / b

print("1. Add")
print("2. Subtract")
print("3. Multiply")
print("4. Divide")

choice = input("Enter your choice: ")
num1 = float(input("Enter first number: "))
num2 = float(input("Enter second number: "))

if choice == "1":
    print("Result:", add(num1, num2))
elif choice == "2":
    print("Result:", subtract(num1, num2))
elif choice == "3":
    print("Result:", multiply(num1, num2))
elif choice == "4":
    print("Result:", divide(num1, num2))
else:
    print("Invalid choice")
🔹 Example:
Choice = 1, Numbers = 10 and 5
Output → 15.0
🔹 Key Takeaway: Using functions makes code modular, reusable, and easier to manage, especially when multiple operations are involved
#Day11 #Python #30DaysOfCode #LearningInPublic #DataAnalytics #ProblemSolving
🚀 Day 344 of solving 365 medium questions on LeetCode! 🔥
Today’s challenge: “89. Gray Code”
✅ Problem:
You are given an integer n. Your goal is to generate an n-bit Gray code sequence: an array of 2^n integers where every adjacent pair of numbers (including the first and last) differs by exactly one bit in its binary representation.
✅ Approach (Bit Manipulation / The Formula)
You could solve this using backtracking or mirroring, but there is a mathematical cheat code that solves it instantly!
Find the size: For an n-bit sequence there are exactly 2^n numbers. I used a bitwise left shift (1 << n) to calculate this size instantly.
The magic formula: The i-th number in a standard Gray code sequence is always i ^ (i >> 1). This shifts the number's bits right by one and applies a bitwise XOR against the original number.
List comprehension: I packed this entire logic into a single Python list comprehension that loops from 0 up to the calculated size, applying the magic formula to every index i and generating the whole sequence in one go!
✅ Key Insight
Bitwise operations are essentially black magic when you know the right formulas. Recognizing that Gray code has a direct integer-to-sequence mapping completely eliminates the need for messy recursive state-tracking. What looks like a complex combinatorial sequence problem is actually just a one-line math trick!
✅ Complexity
Time: O(2^n) — we must iterate to generate exactly 2^n elements.
Space: O(1) — excluding the output array, the generation uses strictly constant auxiliary memory.
🔍 Python solution attached!
🔥 Flexing my coding skills until recruiters notice!
#LeetCode365 #BitManipulation #Math #Python #ProblemSolving #DSA #Coding #SoftwareEngineering
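The solution is attached as an image, so here is a minimal sketch of the one-liner the post describes (the function name is my own; the formula i ^ (i >> 1) and the 1 << n size are from the post):

```python
def gray_code(n: int) -> list[int]:
    # i ^ (i >> 1) maps index i directly to the i-th Gray code value;
    # 1 << n is the sequence length 2^n.
    return [i ^ (i >> 1) for i in range(1 << n)]

print(gray_code(2))  # → [0, 1, 3, 2]
```

Each adjacent pair (and the wrap-around pair 2 → 0) differs by exactly one bit, as required.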
🔷 A simple train/test split is not always enough. I learned this the hard way when my model looked great on paper and struggled on real data.
📌 Here is what nobody tells you about splitting data properly.
The basic split gives you two sets: training and testing. That works for simple projects. But what if you need to tune your model? You test different settings, pick the best one, and evaluate on the test set. The problem is that you have now indirectly used the test set to make decisions. It is no longer a fair judge.
This is where a three-way split becomes important.
🔹 X_train, X_temp, y_train, y_temp = train_test_split(
       X, y, test_size=0.3, random_state=42
   )
🔹 X_val, X_test, y_val, y_test = train_test_split(
       X_temp, y_temp, test_size=0.5, random_state=42
   )
Now you have three sets:
Training set. The model learns here. 70 percent of your data.
Validation set. You tune and compare models here. 15 percent.
Test set. You evaluate the final model here. Once. Never again. 15 percent.
The test set is sacred. You look at it exactly one time, at the very end.
One more thing that most people miss: always stratify your split when your target column is imbalanced.
🔹 train_test_split(X, y, stratify=y, test_size=0.2)
stratify=y makes sure both sets have the same proportion of each class. Without it you might end up with a training set that barely sees the minority class, and a model that has no idea it exists.
The split is not a formality. It is a decision that shapes every result that follows. Get it right before you touch anything else.
❓ What split ratio do you use for your projects, and why?
#DataScience #MachineLearning #Python
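The stratify=y effect is easy to see on a small made-up example. The 90/10 class ratio below is hypothetical; the call itself is the standard scikit-learn train_test_split from the post.

```python
from collections import Counter
from sklearn.model_selection import train_test_split

# Hypothetical imbalanced target: 90 zeros, 10 ones
X = [[i] for i in range(100)]
y = [0] * 90 + [1] * 10

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)
print(Counter(y_tr), Counter(y_te))
# Both sets keep the 90/10 ratio: 72/8 in train, 18/2 in test
```

Without stratify=y, a random 20-percent sample could easily draw 0 or 1 of the ten minority examples into the test set.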
Shallow Copy vs Deep Copy — The 2 AM Bug Trap 🛑
Most developers think they understand copying objects, until their original data mysteriously changes. That’s not a bug; that’s memory behavior biting you.
→ Shallow Copy
Creates a new container, but nested objects are still shared (by reference).
👉 Change nested data → both copies change.
Best for: flat, simple data.
→ Deep Copy
Creates a completely independent clone; everything is copied recursively.
👉 Change anything → the original stays untouched.
Best for: complex, nested structures.
💡 Rule of Thumb
Shallow → when you only need a surface-level copy
Deep → when you need true isolation
⚠️ The real trap: most bugs aren’t syntax errors. They come from not understanding how data behaves in memory.
If you’ve ever spent hours debugging only to realize it was a shallow copy issue, welcome to the club 😄
#Python #Python3 #Programming #SoftwareEngineering #CleanCode #Debugging #TechTips #PythonDeveloper #BackendDevelopment
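A minimal demonstration of the trap, using Python's standard copy module (the dict contents are made up for illustration):

```python
import copy

original = {"user": "amy", "tags": ["admin", "editor"]}

shallow = copy.copy(original)      # new dict, but the "tags" list is shared
deep = copy.deepcopy(original)     # fully independent recursive clone

shallow["tags"].append("viewer")   # mutates the list both dicts point to

print(original["tags"])  # → ['admin', 'editor', 'viewer']  (changed!)
print(deep["tags"])      # → ['admin', 'editor']            (untouched)
```

The shallow copy changed the original through the shared nested list, which is exactly the 2 AM bug the post describes.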
I built a complete 𝗨𝘀𝗲𝗱 𝗖𝗮𝗿 𝗣𝗿𝗶𝗰𝗲 𝗣𝗿𝗲𝗱𝗶𝗰𝘁𝗼𝗿 from scratch, creating a full end-to-end pipeline that handles everything from raw data to a live application.
Instead of relying on a pre-built dataset, I identified a unique problem and built my own data source using web scraping. My goal was to move beyond tutorials and mimic a real-world 𝗱𝗮𝘁𝗮 𝘀𝗰𝗶𝗲𝗻𝗰𝗲 workflow.
• 𝗦𝗰𝗿𝗮𝗽𝗶𝗻𝗴: Automated data collection to get real-time market prices.
• 𝗣𝗿𝗲𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴: Cleaning messy web data into a machine-learning-ready format.
• 𝗠𝗼𝗱𝗲𝗹𝗶𝗻𝗴: Training a robust regressor to find the patterns.
• 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁: Building a Flask web app to make the model accessible to anyone.
The Workflow: 𝗦𝗰𝗿𝗮𝗽𝗲 𝗗𝗮𝘁𝗮 → 𝗖𝗹𝗲𝗮𝗻 & 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺 → 𝗧𝗿𝗮𝗶𝗻 𝗠𝗼𝗱𝗲𝗹 → 𝗗𝗲𝗽𝗹𝗼𝘆
#MachineLearning #DataScience #Python #Flask #WebScraping #PortfolioProject
Check out the full documentation and code on GitHub: https://lnkd.in/gAZp4iKq
I just learned something that no LeetCode problem ever taught me.
How do you sort 200 GB of data when your RAM is only 5 GB? 🤯
I came across this in a real interview question today — and honestly, I had no clue.
The answer? External Merge Sort. Here's how it works in simple terms 👇
📦 Phase 1 — Break it down:
• Read 5 GB of data into RAM
• Sort it using QuickSort
• Write it back to disk as a sorted "chunk"
• Repeat 40 times → now you have 40 sorted files
🔀 Phase 2 — Merge using a Min-Heap:
• Open all 40 files at once
• Push the first element of each file into a Min-Heap (size = just 40!)
• Pop the minimum → write to output → push the next element from that file
• Repeat until all 200 GB are merged
The genius part? The heap never holds more than 40 elements at a time. Not 200 GB. Just 40.
All those Heap and Merge Sort problems on LeetCode? This is exactly what they're preparing you for — just at a massive scale.
This is why Big Tech companies ask System Design questions. Real-world data doesn't fit in an array. 🌍
📸 Attached the full Python implementation above — Phase 1 (Run Creation) + Phase 2 (K-Way Merge) with comments explaining every step.
Drop a 🙋 if you had no idea this concept existed before today!
And tell me — what's the most surprising DSA concept YOU'VE come across recently? 👇
#DSA #LeetCode #SystemDesign #SoftwareEngineering #Python #CodingInterview #ExternalSorting
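The attached implementation is an image, so here is a compact in-memory simulation of the two phases described above. It is a sketch: real runs would be written to and streamed back from disk, and heapq.merge plays the role of the min-heap k-way merge (it keeps one element per run, exactly as the post describes).

```python
import heapq

def external_merge_sort(data, chunk_size):
    """Simulated external merge sort: sorted runs + k-way min-heap merge."""
    # Phase 1: split into chunks that "fit in RAM" and sort each one
    runs = [sorted(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]
    # Phase 2: merge the sorted runs; the heap inside heapq.merge never
    # holds more than one element per run at a time
    return list(heapq.merge(*runs))

print(external_merge_sort([9, 4, 7, 1, 8, 2, 6, 3, 5], 3))
# → [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With 200 GB of data and 5 GB chunks this would produce 40 runs, so the merge heap would hold at most 40 elements.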
Mistakes are part of the process
Day 7 – #100DaysOfCode
⏰ Time Spent: 2 hours
⚒️ What I Did:
* Yesterday I learned one way to read scatter plots; today I practiced it
* Modified my function to make it reusable
* Plotted relationships between complaints and aggregated features
I observed only these two trends:
* log(x) vs y → logarithmic trend [ y = a · log(x) + b ]
* log(x) vs log(y) → power law [ y = k · xᵃ ]
But then I realized something important…
I was plotting a sum on the x-axis, which naturally inflates the values and created misleading patterns.
So I switched to the mean, but the trends disappeared, which suggests there is no real relation. I'll experiment with a few other transformations before concluding that.
---
🚪 Links:
* Repo: https://lnkd.in/g7zsMygp
---
🧠 Learning: A bad feature choice can create fake patterns.
📌 Closing: I should work on these things when I'm not tired (mornings / after a nap).
#DataScience #DataAnalytics #Python #CodingJourney
Hello dudes and dudettes!! 🚀
Day 12/150 — Solved LeetCode 380: Insert Delete GetRandom O(1)
Today’s problem felt like a real brain workout 🧠 — not because it was long, but because it demanded the right idea.
At first, it looks simple:
👉 Insert
👉 Delete
👉 Get Random
But the catch? ⚡ All operations must run in O(1) time. That’s where things get interesting.
🧠 Initial Thought Process
Using a list? Insert ✅ Get random ✅ Remove ❌ (takes O(n))
Using a set? Insert ✅ Remove ✅ Get random ❌
So clearly… one data structure alone isn’t enough.
💡 The Breakthrough Moment
The solution clicked when I realized: 👉 Why not combine the strengths of both?
Use a list for fast random access.
Use a hash map for instant lookups.
This combination unlocks true O(1) performance for all operations.
🔥 The Most Interesting Part — Deletion Trick
Normally, removing an element from a list is expensive because elements need to shift. But here’s the smart trick:
👉 Swap the element to be removed with the last element
👉 Remove the last element
👉 Update the index in the hash map
That’s it. No shifting. No extra cost. 💥 Constant-time deletion achieved.
📊 How It Works (Simple Flow)
A list keeps all elements; a map stores each value’s index.
Insert → add to list + store index
Remove → swap + pop + update index
GetRandom → pick directly from the list
Everything stays efficient and clean.
😎 What I Learned
Sometimes one data structure isn’t enough — combining them is the real power.
Smart tricks (like swap & pop) can completely change time complexity.
Designing systems is more about thinking than coding.
🎯 Key Takeaway
“Efficiency isn’t about doing things faster… it’s about avoiding unnecessary work.”
🔥 Another solid step forward in the journey. On to the next challenge.
#LeetCode #Algorithms #DataStructures #ProblemSolving #CodingJourney #100DaysOfCode #Python #LearningInPublic
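The list-plus-hash-map design above can be sketched as follows. This is my own reconstruction of the standard approach the post describes, not the author's exact code:

```python
import random

class RandomizedSet:
    """List + hash map: insert, remove, getRandom in average O(1)."""

    def __init__(self):
        self.values = []   # list: O(1) append and random access
        self.index = {}    # map: value -> its position in self.values

    def insert(self, val: int) -> bool:
        if val in self.index:
            return False
        self.index[val] = len(self.values)
        self.values.append(val)
        return True

    def remove(self, val: int) -> bool:
        if val not in self.index:
            return False
        # Swap-and-pop: overwrite val's slot with the last element,
        # then drop the tail -- no shifting needed
        i, last = self.index[val], self.values[-1]
        self.values[i] = last
        self.index[last] = i
        self.values.pop()
        del self.index[val]
        return True

    def getRandom(self) -> int:
        return random.choice(self.values)
```

The only subtle case is removing the last element itself, which the swap handles as a harmless self-swap.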
🚀 𝗖𝗿𝗮𝗰𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 “𝗕𝗲𝘀𝘁 𝗧𝗶𝗺𝗲 𝘁𝗼 𝗕𝘂𝘆 𝗮𝗻𝗱 𝗦𝗲𝗹𝗹 𝗦𝘁𝗼𝗰𝗸” 𝗣𝗿𝗼𝗯𝗹𝗲𝗺
💡 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗦𝘁𝗮𝘁𝗲𝗺𝗲𝗻𝘁
You’re given stock prices where prices[i] = price on day i.
👉 Goal: Buy once and sell once (on a later day) to get maximum profit.
📌 𝗘𝘅𝗮𝗺𝗽𝗹𝗲
Input: [4, 2, 3, 4, 5, 2]
Output: 3
✔ Buy at 2 → Sell at 5
🧠 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝟭: 𝗕𝗿𝘂𝘁𝗲 𝗙𝗼𝗿𝗰𝗲 (𝗢(n²))
Check every possible pair of buy & sell days.
❌ Inefficient for large data
⚡ 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝟮: 𝗧𝘄𝗼 𝗣𝗼𝗶𝗻𝘁𝗲𝗿 / 𝗦𝗹𝗶𝗱𝗶𝗻𝗴 𝗪𝗶𝗻𝗱𝗼𝘄 (𝗢(n))
Track buy and sell pointers; update buy when a smaller price appears.
✔ Better performance with linear time
🔥 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝟯: 𝗢𝗽𝘁𝗶𝗺𝗮𝗹 (𝗚𝗿𝗲𝗲𝗱𝘆 - 𝗢(n))
Track the minimum price so far and calculate the profit at each step.
✔ Most efficient and clean solution
💻 𝗢𝗽𝘁𝗶𝗺𝗮𝗹 𝗖𝗼𝗱𝗲
def optimal_stock(prices):
    min_price = float("inf")
    max_profit = 0
    for price in prices:
        min_price = min(min_price, price)
        profit = price - min_price
        max_profit = max(max_profit, profit)
    return max_profit
🔗 𝗚𝗶𝘁𝗛𝘂𝗯 𝗖𝗼𝗱𝗲: https://lnkd.in/g-iaHxs5
🎯 𝗞𝗲𝘆 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
Always track the minimum before the maximum.
Greedy approaches often give optimal results in linear time.
💬 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 𝗧𝗶𝗽
Start with brute force → optimize step by step. This shows strong problem-solving skills. 💡
#DataStructures #Algorithms #CodingInterview #Python #LeetCode #SoftwareEngineering #ProblemSolving #GreedyAlgorithm