We've established the principles of state, primitives, and type integrity. Now, you will apply them.

#ZeroToFullStackAI Day 4/135: The First Challenge.

Theory is useless without application. For the past three days, we have built the foundation. Today, you will build on it.

Your brief is to build a "Simple Revenue Calculator." This will test your grasp of:
- State: storing data in variables.
- Primitives: using the correct int, float, and str types.
- Type Integrity: handling the TypeError that will absolutely break your code if you are not careful.

The Challenge Requirements:
- Ask the user for "Units Sold" (it will be a str).
- Ask the user for "Price Per Unit" (it will also be a str).
- Correctly convert these inputs to the right numeric types (int for units, float for price).
- Calculate the total_revenue (Units * Price).
- Print a clean, formatted f-string: Total Revenue: ₹1,250.50

This is a direct test of Day 3's "Type Integrity" principle. Post your complete solution code in the comments. 👇 I will review the submissions and post my own production-ready solution tomorrow.

#Python #DataScience #SoftwareEngineering #AI #Developer #Challenge
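For orientation while attempting it, here is a minimal sketch of the shape one solution could take. This is not the author's promised production-ready answer; the prompt strings and error handling are illustrative assumptions. Note the distinction the sketch points out: multiplying the raw strings raises a TypeError, while converting a malformed string raises a ValueError.

```python
# A minimal sketch of one possible solution -- not the author's official answer.
# Prompt wording and error handling are illustrative assumptions.

units_raw = input("Units Sold: ")        # input() always returns a str
price_raw = input("Price Per Unit: ")    # also a str

# Multiplying the raw strings (units_raw * price_raw) raises a TypeError;
# converting a malformed string like "abc" raises a ValueError instead.
try:
    units = int(units_raw)       # whole units -> int
    price = float(price_raw)     # money with decimals -> float
except ValueError:
    print("Invalid input: units must be a whole number, price a number.")
else:
    total_revenue = units * price
    # ',' inserts thousands separators; '.2f' fixes two decimal places
    print(f"Total Revenue: ₹{total_revenue:,.2f}")
```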
More Relevant Posts
Day 9 of #100DaysOfLeetCode

Today's problem, Minimum One Bit Operations to Make Integers Zero, was a fascinating dive into bit manipulation and recursive logic, combining mathematical patterns with binary reasoning.

1️⃣ Minimum One Bit Operations to Make Integers Zero
The problem required transforming a given integer n into 0 using a specific set of bit operations. Each operation flips bits under strict conditions, making the challenge revolve around understanding binary transitions.

🔹 My Approach:
- Recognized that the problem follows a Gray code pattern, where each number's transition to zero corresponds to flipping bits in a mirrored, recursive manner.
- Used recursion to compute results efficiently by breaking the number down at its most significant bit and combining partial results through bitwise operations.
- Leveraged logarithmic computation to identify the position of the highest set bit, minimizing redundant calculations. (A sketch of this recursion follows below.)

What I Learned:
This problem strengthened my grasp of bitwise recursion and mathematical patterns in binary systems. It was a great example of how a deep understanding of binary representation can simplify seemingly complex transformation problems.

Complexity Analysis:
- Time Complexity: O(log n), since each step reduces the problem size by one bit.
- Space Complexity: O(log n), due to the recursive calls.

#100days #striver #leetcode #100daysofleetcode #DSA #AI #python #learning #consistent
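A minimal sketch of the MSB-based Gray code recursion described above, reconstructed from the post's description rather than taken from the author's submitted code:

```python
def minimum_one_bit_operations(n: int) -> int:
    """Minimum operations to reduce n to 0 under the problem's flip rules.

    Uses the Gray-code identity f(2^k + rest) = (2^(k+1) - 1) - f(rest).
    """
    if n == 0:
        return 0
    k = n.bit_length() - 1                 # position of the highest set bit
    # Clearing 2^k alone costs 2^(k+1) - 1 steps; the remainder runs mirrored.
    return (1 << (k + 1)) - 1 - minimum_one_bit_operations(n ^ (1 << k))

# Sanity checks (n = 3 takes 2 operations, n = 6 takes 4):
assert minimum_one_bit_operations(3) == 2
assert minimum_one_bit_operations(6) == 4
```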
🚀 Day 60 of 365 Questions Challenge!

Today's LeetCode daily challenge: maximize the frequency of an element in an array after performing a limited number of increment operations.

Problem: Maximum Frequency of an Element After Performing Operations
🔗 Link: https://lnkd.in/ggqpt8kF

✔️ My Approach:
- Sort the array to facilitate sliding window operations.
- Use binary search (bisect_left) to find the bounds of the subarray whose values can reach a given target within the allowed adjustment k.
- For each target, compute the current frequency in that valid range and top it up with at most numOperations conversions.
- Track the maximum possible frequency across all targets.

⏱️ Time Complexity: O(n log n)
📦 Space Complexity: O(n)

📝 Shared handwritten notes & code as well for deeper reference; a sketch of the core idea follows below. Use this sliding window with binary search approach to efficiently maximize element frequency with limited operations!

#LeetCode365 #Day60 #MaxFrequency #BinarySearch #SlidingWindow #ProblemSolving #Python #GenAI #AI #ML #DL #AgenticAI
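A compact sketch of that approach, reconstructed from the description above (not the author's attached code). It only tries targets already present in the array, which covers the common case; some variants of the problem also require trying targets between existing values:

```python
import bisect
from collections import Counter

def max_frequency(nums: list[int], k: int, num_operations: int) -> int:
    """Max frequency when up to num_operations distinct elements may each
    be shifted by any amount in [-k, k]."""
    nums.sort()
    best = 0
    for target, already in Counter(nums).items():
        lo = bisect.bisect_left(nums, target - k)   # first value able to reach target
        hi = bisect.bisect_right(nums, target + k)  # one past the last such value
        reachable = (hi - lo) - already             # candidates needing an operation
        best = max(best, already + min(num_operations, reachable))
    return best

print(max_frequency([1, 4, 5], 1, 2))  # -> 2 (shift 4 or 5 toward the other)
```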
From 38% to 60% F1-Score: Lessons from Building My First Production-Ready ML Pipeline

One month ago, I started a project to predict bank term deposit subscriptions. The goal was straightforward: help banks target the right customers and optimize their marketing spend.

My first models, Naive Bayes and Decision Trees, landed at a 38% F1-Score. Not great, but it gave me a baseline. Then I began applying what I've been learning through Interview Kickstart's AIML program:
- Random Forest, XGBoost, and LightGBM lifted performance by over 20 percentage points.
- Bayesian hyperparameter tuning with Optuna added another 2–5 points per model.
- Threshold optimization for business metrics made the final breakthrough.

After three weeks of iteration:
- F1-Score: 60.33%
- ROC-AUC: 95.22%
- Accuracy: 91.90%

But the real insights came from the process:
- XGBoost offered the best balance between precision and recall.
- LightGBM trained 3x faster and delivered the most reliable probability estimates.
- Threshold tuning had more impact than another round of hyperparameter search.
- Modular code made experimentation dramatically easier.

This project reminded me that model optimization isn't just about stacking algorithms; it's about building infrastructure that supports iteration, understanding trade-offs, and testing systematically. Coming from a QA engineering background, that mindset translated naturally into ML development. The fundamentals (validation, iteration, and clarity) still apply.

You can find the full code and documentation here:
🔗 GitHub: https://lnkd.in/gva4E2Qb
🔗 Project page: https://lnkd.in/gT73FFyY

What's one lesson from your recent ML work that changed how you build?

#MachineLearning #DataScience #Python #CareerTransition #MLEngineering #ModelOptimization #MLProjects
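Since threshold tuning was the highest-leverage step here, a minimal sketch of the idea (illustrative only, not the project's actual code): sweep candidate thresholds over held-out probabilities and keep the one that maximizes F1.

```python
# Illustrative threshold sweep -- not the project's actual code.
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(y_true: np.ndarray, proba: np.ndarray) -> tuple[float, float]:
    """Return (threshold, f1) maximizing F1 on held-out predicted probabilities."""
    thresholds = np.linspace(0.05, 0.95, 181)
    scores = [f1_score(y_true, (proba >= t).astype(int)) for t in thresholds]
    i = int(np.argmax(scores))
    return float(thresholds[i]), float(scores[i])

# Example with dummy validation data:
y_val = np.array([0, 0, 1, 1, 0, 1])
p_val = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.65])
print(best_threshold(y_val, p_val))
```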
A "for" loop works on a known quantity. A while loop works until a condition is met. One is for data; the other is for state. #ZeroToFullStackAI Day 10/135: The 'while' Loop (Conditional Iteration). Yesterday, we built an engine (the for loop) to iterate over a finite collection—like a list of prices. But what if you don't know how long something will take? How long until a user enters "quit"? How long until a web service is "online"? How long until a game level is "complete"? This requires a different kind of engine: the while loop. A while loop doesn't iterate over a collection. It iterates as long as a specific condition remains True. It's the mechanism we use to build listeners, game loops, and services that "poll" for a status change. This is our tool for managing iteration based on an unknown and conditional future state. We've mastered our iteration engines. But our List data structure is slow for finding data. Tomorrow, we build a new structure for high-speed lookups: The Dictionary. #Python #DataScience #SoftwareEngineering #AI #Developer #Automation
🚀 From Curiosity to Deployment: My First End-to-End ML System

I still remember, about a month ago, when my friend Isaak Kamau asked me: "How do you make sure that a mama mboga (a neighborhood vegetable vendor) who knows nothing about Jupyter notebooks or Python scripts can actually use your model?"

That question changed everything. It sparked my curiosity to go beyond building models and to think about deployment, usability, and real-world impact.

Over the past few weeks, I've been working on a Temperature Anomaly Detection System for a fictional cold storage facility. This time, I approached it differently, with the mindset that the client should see and trust how their goods are performing in real time.

💡 I built and deployed:
- An LSTM-based anomaly detection model using FastAPI for backend inference
- A Streamlit dashboard that displays real-time temperature readings and anomaly alerts
- A SQLite database for persistent storage

🧠 This project taught me how machine learning meets software engineering: bridging data, models, and user experience into one system.

🌐 Explore the full project here:
🔹 Live Dashboard: https://lnkd.in/eN8H5ENe
🔹 API Endpoint: https://lnkd.in/eh7EUv-Z
🔹 Source Code: https://lnkd.in/edzKrDnm

This journey started from a simple question, but it reshaped how I think about data products: not just as models, but as solutions people can actually use.

#MachineLearning #FastAPI #Streamlit #DataScience #ModelDeployment #Python #MLOps #EndToEndML #Innovation
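For readers unfamiliar with the serving side, a minimal sketch of the FastAPI inference-endpoint pattern such a system uses. Everything here is an illustrative assumption, not the project's actual code: the route name, the model names, and the 2–8 °C cold-chain band standing in for the LSTM.

```python
# Illustrative FastAPI anomaly endpoint -- names and thresholds are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TemperatureReading(BaseModel):
    sensor_id: str
    temperature_c: float

def detect_anomaly(temp_c: float, low: float = 2.0, high: float = 8.0) -> bool:
    # Stand-in for the LSTM model: flag readings outside a safe cold-chain band.
    return not (low <= temp_c <= high)

@app.post("/predict")
def predict(reading: TemperatureReading) -> dict:
    return {
        "sensor_id": reading.sensor_id,
        "anomaly": detect_anomaly(reading.temperature_c),
    }
# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```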
Ever felt lost in a jungle of nested if-else statements? 🌪️

Picture this: you're leading a project relying on intricate business logic. Instead of facepalming at an endless maze of "ifs," you discover there's a simpler, clearer way to handle rules using a rules engine. But how do you build one from scratch?

By starting with something we all learned in school: truth tables. While they might seem daunting due to their exponential growth in size, what if I told you they're often just sparse matrices hidden under the surface? Transforming these tables into a compact representation opens the door to creating a lightweight rules engine that can streamline complex logic without the headaches.

With the right techniques, you can easily manipulate these sparse representations for efficient inference, avoiding that overwhelming table dilemma. Imagine your Python code robustly handling business logic while being elegant and efficient!

✅ Build a more intuitive logic engine with state vectors.
🧩 Use the vector-logic library to simplify complex logical expressions.
🔧 Master set operations to make your data manipulations seamless.

What if we could approach complex logic like a simple algebra problem? 🤔 What's the most convoluted piece of logic you've had to untangle in your projects?

#DataScience #Python #RulesEngine #Logic #DataAnalysis #PythonProgramming #AI #MachineLearning
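A toy sketch of the sparse-truth-table idea in plain Python (deliberately not using the vector-logic library mentioned above; the rule and variable names are made up): store only the rows that evaluate to True, and inference becomes a set lookup.

```python
# Sparse truth table: keep only the True rows of the 2**n table.
from itertools import product

VARIABLES = ("is_premium", "over_limit", "flagged")  # hypothetical rule inputs

def rule(is_premium: bool, over_limit: bool, flagged: bool) -> bool:
    # Made-up business rule: block when over limit, unless premium and unflagged.
    return over_limit and not (is_premium and not flagged)

TRUE_ROWS = {
    row for row in product((False, True), repeat=len(VARIABLES)) if rule(*row)
}

def infer(**assignment: bool) -> bool:
    row = tuple(assignment[v] for v in VARIABLES)
    return row in TRUE_ROWS  # O(1) set lookup instead of re-walking nested ifs

print(infer(is_premium=False, over_limit=True, flagged=False))  # True: blocked
print(infer(is_premium=True, over_limit=True, flagged=False))   # False: allowed
```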
🌐 Regularization Techniques in Linear Regression

Objective: explore Ridge and Lasso Regression to prevent overfitting and improve model performance on noisy data.

What I Explored:
1. Generated a synthetic dataset using NumPy to simulate linear relationships with added noise.
2. Implemented Simple Linear Regression, Ridge Regression, and Lasso Regression using scikit-learn.
3. Evaluated model performance with metrics such as:
   - Mean Absolute Error (MAE)
   - Mean Squared Error (MSE)
   - Root Mean Squared Error (RMSE)
   - R² and Explained Variance Score
4. Compared how the regularization parameter (alpha) influences the bias–variance trade-off.
5. Visualized the regression lines and observed how Ridge and Lasso shrink coefficients to reduce overfitting.

Conclusion:
- Ridge Regression penalizes large coefficients, reducing overfitting.
- Lasso Regression can shrink some coefficients to exactly zero, performing feature selection automatically.
These methods make linear regression models more robust and reliable for real-world data.

GitHub Link: https://lnkd.in/dpMuWHDx

#MachineLearning #DataScience #LinearRegression #Regularization #RidgeRegression #LassoRegression #Python #ScikitLearn #MLModels #DataAnalytics #AI #Coding #GitHub #MLProjects
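A condensed sketch of that workflow (the synthetic data and alpha values are illustrative; the full version is in the linked repo):

```python
# Compare plain, Ridge, and Lasso regression on noisy synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 5))
true_coef = np.array([2.0, 0.0, -1.5, 0.0, 0.5])     # sparse ground truth
y = X @ true_coef + rng.normal(scale=1.0, size=200)  # linear signal + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("Linear", LinearRegression()),
                    ("Ridge ", Ridge(alpha=1.0)),
                    ("Lasso ", Lasso(alpha=0.1))]:
    model.fit(X_tr, y_tr)
    r2 = r2_score(y_te, model.predict(X_te))
    # Lasso tends to drive the weakest coefficients to exactly zero.
    print(name, round(r2, 3), np.round(model.coef_, 2))
```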
💡 Your linear model failing? Here's why 👇

When your data curves, bends, or twists, simple Linear Regression just can't capture those curves. The result? High error rates and poor predictions.

The solution: Polynomial Regression 📈

Think of it as Linear Regression's more flexible cousin. Instead of just using x, we add powers of x (x², x³, etc.). The degree controls this complexity:
- Degree 1 → Linear (straight line)
- Degree 2 → Quadratic (one curve)
- Degree 3 → Cubic (more curves)

But here's the catch ⚠️
→ Too high a degree = Overfitting (memorizes noise)
→ Too low a degree = Underfitting (misses patterns)
→ Just right = Perfect balance 🎯

I've written out all the key formulas in the Colab notebook, so you can visualize how the math evolves from linear to higher-degree curves.

Working with multiple variables? Try Multiple Polynomial Regression. Python makes this incredibly easy with Pipelines, combining PolynomialFeatures + LinearRegression in one clean workflow.

🔗 Check out the full Colab notebook with formulas + working code examples (link in comments)

#MachineLearning #DataScience #Python #Regression #PolynomialRegression #AI #Polynomial
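A minimal sketch of that Pipeline workflow (the synthetic cubic data and degree=3 are illustrative; the author's notebook has the full version with formulas):

```python
# PolynomialFeatures + LinearRegression in one pipeline.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, size=(100, 1)), axis=0)
y = 0.5 * X.ravel() ** 3 - X.ravel() ** 2 + rng.normal(scale=2.0, size=100)

# PolynomialFeatures expands x into [1, x, x^2, x^3]; LinearRegression fits it.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, y)
print(model.predict([[2.0]]))  # prediction at x = 2
```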
Imagine knowing which customers are likely to leave, before they do. That's what I explored in my latest Customer Churn Prediction project 📊

Built a full ML pipeline: data cleaning, feature engineering, model comparison, and a Streamlit dashboard that predicts churn probability in real time.

🎯 Logistic Regression emerged as the best model with ROC-AUC = 0.86

A great exercise in turning data into actionable business insights!

GitHub Link: https://lnkd.in/gEsKyKHX

#DataScience #MachineLearning #CustomerChurn #Streamlit #Python #AI
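The scoring step at the core of such a dashboard, as a toy sketch (the features and data here are invented placeholders, not the project's):

```python
# Toy churn-probability scoring -- features and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical engineered features: tenure_months, monthly_charge, has_contract.
X_train = np.array([[12, 70.5, 1], [2, 95.0, 0], [48, 20.0, 1], [5, 99.9, 0]])
y_train = np.array([0, 1, 0, 1])  # 1 = churned

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

new_customer = np.array([[3, 89.0, 0]])
print(model.predict_proba(new_customer)[0, 1])  # churn probability
```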