Just completed the Gradient Descent lab in Andrew Ng's ML Specialization, and it genuinely clicked for me here.

The concept: instead of guessing the best values for w and b in a linear model, gradient descent finds them automatically by repeatedly moving in the direction that reduces error.

What I built from scratch in Python:
✅ compute_cost() — measures how wrong the model is
✅ compute_gradient() — calculates which direction to move
✅ gradient_descent() — runs 10,000 iterations to find optimal parameters

What surprised me most:
→ Starting from w=0, b=0, the algorithm found w≈200, b≈100 for a house price dataset
→ The cost dropped rapidly at first, then slowed as it approached the minimum, exactly like rolling a ball to the bottom of a bowl
→ Setting the learning rate too high (α = 0.8) caused the model to completely diverge: cost shot up instead of down

That last point was the most valuable. Seeing divergence visually made the theory real. Building these functions line by line beats reading about them any day.

#MachineLearning #Python #AndrewNg #LearningInPublic #DataScience #GradientDescent
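Since the post names the three functions, here is a minimal sketch of what they can look like for the univariate model f(x) = wx + b. This is not the lab's actual code; the two-point house-price dataset is an assumption chosen so that w≈200, b≈100 is the exact solution:

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Mean squared error cost J(w,b) = (1/2m) * sum((w*x + b - y)^2)."""
    m = x.shape[0]
    f_wb = w * x + b                      # model predictions
    return np.sum((f_wb - y) ** 2) / (2 * m)

def compute_gradient(x, y, w, b):
    """Partial derivatives of the cost with respect to w and b."""
    m = x.shape[0]
    err = (w * x + b) - y                 # prediction error per example
    dj_dw = np.dot(err, x) / m
    dj_db = np.sum(err) / m
    return dj_dw, dj_db

def gradient_descent(x, y, w, b, alpha, num_iters):
    """Repeatedly step w and b in the direction that lowers the cost."""
    for i in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w -= alpha * dj_dw                # simultaneous parameter update
        b -= alpha * dj_db
        if i % 1000 == 0:
            print(f"iter {i:5d}: cost = {compute_cost(x, y, w, b):.4f}")
    return w, b

# Hypothetical house-price data (1000s of sqft -> price in $1000s)
x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])
w, b = gradient_descent(x_train, y_train, w=0.0, b=0.0, alpha=0.01, num_iters=10_000)
print(f"w = {w:.1f}, b = {b:.1f}")        # converges toward w=200, b=100
```

With α = 0.8 instead of 0.01 the updates overshoot the minimum on every step and the printed cost grows without bound, which is the divergence the post describes.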
More Relevant Posts
Logic meets Code. 🧩💻

I just wrapped up another challenge on HackerRank focusing on Probability & Statistics, specifically calculating outcomes across multiple independent events. The task: determining the exact probability of drawing a specific color combination from two different bags.

While the math can be done on paper, translating these permutations and combinations into clean, efficient code is where the real fun is. Steps like these are small but vital foundations for building more complex machine learning models later on. Excited to keep this momentum going!

Solution: https://lnkd.in/gC9j7RgS

#DataScience #Python #HackerRank #Statistics #ContinuousLearning #AI
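As an illustration of turning this kind of counting problem into code (the post does not give the exact bag contents, so the numbers below are hypothetical), Python's `fractions` module keeps the answer exact instead of a rounded float:

```python
from fractions import Fraction

# Hypothetical bag contents, purely for illustration:
# bag X: 4 red, 3 black; bag Y: 5 red, 4 black.
bag_x = {"red": 4, "black": 3}
bag_y = {"red": 5, "black": 4}

def draw_prob(bag, color):
    """Exact probability of drawing `color` from `bag` in one draw."""
    return Fraction(bag[color], sum(bag.values()))

# One independent draw from each bag: P(A and B) = P(A) * P(B).
# Probability of "one red and one black" in either order:
p = (draw_prob(bag_x, "red") * draw_prob(bag_y, "black")
     + draw_prob(bag_x, "black") * draw_prob(bag_y, "red"))
print(p)  # -> 31/63 for these particular counts
```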
Day 72. Spent time going deeper into XGBoost today. Covered classification and worked through the math:
- gradients & hessians
- leaf weights
- similarity score & gain

Some questions I tried to answer while learning:
- Why do we need Taylor expansion here?
- Why can't we directly differentiate the objective?
- What makes decision trees non-smooth / non-differentiable?

The key realization: since trees produce piecewise constant outputs, the loss surface isn't smooth, which is why second-order approximation becomes necessary.

Still revising, but things are starting to connect.

Notes: https://lnkd.in/gCqHUeK9

#MachineLearning #XGBoost #LearningInPublic #Python #DataScience
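To make those quantities concrete, here is a small sketch assuming squared-error loss, where each gradient is g_i = pred_i - y_i and each hessian is h_i = 1; `lam` and `gamma` stand for the usual regularization terms. The leaf weight w* = -G/(H + λ) is exactly what falls out of minimizing the second-order Taylor approximation of the objective:

```python
import numpy as np

def similarity(g, h, lam=1.0):
    """XGBoost similarity score for a leaf: (sum g)^2 / (sum h + lambda)."""
    return np.sum(g) ** 2 / (np.sum(h) + lam)

def leaf_weight(g, h, lam=1.0):
    """Optimal leaf output w* = -sum(g) / (sum(h) + lambda)."""
    return -np.sum(g) / (np.sum(h) + lam)

def split_gain(g, h, left_idx, lam=1.0, gamma=0.0):
    """Gain of a split = sim(left) + sim(right) - sim(parent) - gamma."""
    gl, hl = g[left_idx], h[left_idx]
    gr, hr = np.delete(g, left_idx), np.delete(h, left_idx)
    return (similarity(gl, hl, lam) + similarity(gr, hr, lam)
            - similarity(g, h, lam) - gamma)

# Tiny example: residuals of 4 samples under squared-error loss
g = np.array([-10.0, -8.0, 7.0, 9.0])  # gradients (pred - y)
h = np.ones_like(g)                     # hessians are all 1 for squared error
print(split_gain(g, h, left_idx=[0, 1]))  # split separating negatives from positives
```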
📊 Understanding Joint Distributions in Probability

Ever wondered how to model the relationship between two random variables? A joint distribution is the key! It describes the probability of two (or more) events happening simultaneously, giving us a complete picture of their interaction.

In my latest Python experiment, I created a simple joint distribution table for two discrete variables, X and Y, representing the number of heads and tails in two coin flips. Here's what I learned:
- The joint distribution tells us the probability of both X and Y taking specific values.
- Marginal distributions help us understand each variable independently.
- Conditional distributions show how one variable behaves given a specific value of the other.

This concept is foundational in statistics, machine learning, and data science. It's amazing how much insight we can gain from just a few lines of code!

🔗 Check out the code snippet in the comments if you're curious to try it yourself.

#Probability #Statistics #DataScience #Python #MachineLearning #Coding
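The author's snippet lives in the post's comments, but a minimal version of the two-coin-flip experiment might look like this (note that Y = 2 - X here, so the joint table is concentrated on the anti-diagonal):

```python
import itertools
import numpy as np

# Enumerate the 4 equally likely outcomes of two fair coin flips.
outcomes = list(itertools.product("HT", repeat=2))

# X = number of heads, Y = number of tails.
joint = np.zeros((3, 3))  # rows: X in {0,1,2}, cols: Y in {0,1,2}
for flips in outcomes:
    x, y = flips.count("H"), flips.count("T")
    joint[x, y] += 1 / len(outcomes)

print(joint)              # joint table P(X=x, Y=y)
print(joint.sum(axis=1))  # marginal P(X) -> [0.25, 0.5, 0.25]
print(joint.sum(axis=0))  # marginal P(Y) -> [0.25, 0.5, 0.25]

# Conditional P(X | Y=1): slice the Y=1 column and renormalize.
col = joint[:, 1]
print(col / col.sum())    # -> [0, 1, 0], since Y=1 forces X=1
```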
Excited to share my latest project: LinearRegression-ML

This is a beginner-friendly Machine Learning project focused on understanding and implementing Linear Regression from scratch. It includes practical notebooks like profit analysis and medical data predictions, along with clear explanations of loss and cost functions.

What I learned:
=> Fundamentals of Linear Regression
=> Cost & loss function implementation
=> Real-world dataset analysis using Python

Repo: https://lnkd.in/guCQQdNe

#MachineLearning #Python_Jupyter_Notebook #DataScience
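One way to make the loss-vs-cost distinction concrete (a hedged sketch, not the repo's actual code): the loss scores a single example, while the cost averages the loss over the whole training set, and it is the cost that the optimizer minimizes.

```python
import numpy as np

def squared_loss(y_hat, y):
    """Loss for one example: how far a single prediction is from its target."""
    return (y_hat - y) ** 2

def cost(y_hat, y):
    """Cost: the average loss over the entire training set."""
    return np.mean(squared_loss(y_hat, y))

y     = np.array([3.0, 5.0, 7.0])
y_hat = np.array([2.5, 5.5, 6.0])
print(squared_loss(y_hat, y))  # per-example losses: [0.25, 0.25, 1.0]
print(cost(y_hat, y))          # single scalar to minimize: 0.5
```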
Day 2 of learning Machine Learning.

Today I worked on a simple linear regression model using Python in Jupyter Notebook. The idea was straightforward:
- Input (x): house size
- Output (y): price

Model used: f(x) = wx + b

I understood how:
- Training data is structured (x_train, y_train)
- Parameters (w, b) define the relationship
- The model uses this to make predictions on new inputs

Also got hands-on with NumPy and basic plotting using Matplotlib. Still very early, but it's becoming clearer how data is converted into predictions.

#MachineLearning #AI #Python #LearningInPublic
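A short sketch of what such a Day 2 notebook boils down to; the two-point dataset and the values of w and b are illustrative assumptions, not the post's actual numbers:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical training data: house size (1000s of sqft) vs price ($1000s)
x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])

# Manually chosen parameters; Day 2 is about the model, not yet the fitting
w, b = 200.0, 100.0

def f(x, w, b):
    """Linear model f(x) = w*x + b."""
    return w * x + b

plt.scatter(x_train, y_train, marker="x", c="r", label="training data")
plt.plot(x_train, f(x_train, w, b), label=f"f(x) = {w:.0f}x + {b:.0f}")
plt.xlabel("size (1000 sqft)")
plt.ylabel("price ($1000s)")
plt.legend()
plt.show()

# Prediction for a new input, e.g. a 1200 sqft house
print(f(1.2, w, b))  # -> 340.0
```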
#PrincipalComponentAnalysis (PCA) is more than just a technique for dimensionality reduction - it's one of the most powerful applications of eigenanalysis in data science. By identifying the directions of maximum variance, PCA simplifies complex datasets while preserving their essential structure.

What's inside this guide:
* The math: covariance matrices and eigen-decomposition.
* The logic: from data centering to explained variance.
* The code: Python realizations using NumPy and scikit-learn.

Swipe through the carousel below to explore the mechanics of PCA! The link to the full #Medium article with complete code is in the first comment.

#DataScience #MachineLearning #Python #LinearAlgebra #AI #STEM
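As a taste of the NumPy route the guide describes, here is a compact sketch (the random data is for illustration only) that runs the centering, covariance, and eigen-decomposition pipeline by hand and cross-checks the explained-variance ratios against scikit-learn:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 samples, 3 features with very different variances
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.2])

# --- PCA by hand ---
Xc = X - X.mean(axis=0)                  # 1. center the data
cov = np.cov(Xc, rowvar=False)           # 2. covariance matrix (features x features)
eigvals, eigvecs = np.linalg.eigh(cov)   # 3. eigh: symmetric matrix, ascending order
order = np.argsort(eigvals)[::-1]        # 4. sort components by variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = eigvals / eigvals.sum()      # 5. explained variance ratio

# --- Cross-check against scikit-learn ---
pca = PCA(n_components=3).fit(X)
print(np.round(explained, 3))
print(np.round(pca.explained_variance_ratio_, 3))  # should match the line above
```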
Master Figures, Lines & Arrows in Matplotlib!

The matplotlib module can plot geometric figures such as rectangles, circles, and triangles, which can then illustrate mathematical, technical, and physical relationships. This blog post demonstrates the creative options of matplotlib through several examples: an illustration of the Pythagorean theorem, a gear representation, a pointer diagram, and a current-carrying conductor in a homogeneous magnetic field.

Dive in now and transform your graphs! https://hubs.la/Q04byPg90

#Python #DataViz #Matplotlib #CodeMagic #RheinwerkComputingBlog
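In the spirit of the Pythagorean example, here is a small sketch with `matplotlib.patches` (my own construction, not the blog's code) that draws a right triangle and the squares on its three sides:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon, Rectangle

# Right triangle with legs a=4, b=3, so the hypotenuse is c=5
fig, ax = plt.subplots()
ax.add_patch(Polygon([(0, 0), (4, 0), (0, 3)], facecolor="lightgray", edgecolor="k"))

# Squares on the two axis-aligned legs: Rectangle is enough
ax.add_patch(Rectangle((0, -4), 4, 4, fill=False, edgecolor="tab:blue"))   # a^2
ax.add_patch(Rectangle((-3, 0), 3, 3, fill=False, edgecolor="tab:green"))  # b^2

# The square on the hypotenuse is rotated, so build it as a Polygon:
# offset both endpoints of the hypotenuse by the outward normal (3, 4)
ax.add_patch(Polygon([(4, 0), (0, 3), (3, 7), (7, 4)],
                     fill=False, edgecolor="tab:red"))                      # c^2

ax.annotate("a² + b² = c²", xy=(2.5, 4.5), xytext=(6, -3),
            arrowprops=dict(arrowstyle="->"))
ax.set_xlim(-4, 8)
ax.set_ylim(-5, 8)
ax.set_aspect("equal")
plt.show()
```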
Today, I focused on working with NumPy arrays, building a solid foundation for data manipulation and analysis. Here's what I practiced:
🔹 Created a 1D array with values from 1 to 15
🔹 Built a 2D array (3×4) filled with ones
🔹 Generated a 3×3 identity matrix
🔹 Explored key array properties like shape, type, and dimensions
🔹 Converted a regular Python list into a NumPy array

This session helped me better understand how data is structured and handled in numerical computing. Getting comfortable with arrays is definitely a crucial step toward more advanced data analysis and machine learning tasks.

Looking forward to building on this momentum 💡

#AI #MachineLearning #Python #NumPy #DataAnalysis #M4ACE
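For reference, each of those five exercises maps to roughly one line of NumPy; this is a plausible reconstruction, not the author's notebook:

```python
import numpy as np

a = np.arange(1, 16)        # 1D array with values 1..15
ones = np.ones((3, 4))      # 2D array (3x4) filled with ones
identity = np.eye(3)        # 3x3 identity matrix

print(a.shape, a.dtype, a.ndim)   # key properties: shape, type, dimensions
print(ones.shape, identity.ndim)

b = np.array([10, 20, 30])  # regular Python list -> NumPy array
print(type(b), b)
```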
Day 7 / ∞: Logistic Regression with Scikit-Learn

Today's lab was all about classification basics: fitting a logistic regression model, making predictions, and calculating accuracy, all in just a few lines of Python.

What stood out → scikit-learn abstracts away the math, but understanding what's happening under the hood (sigmoid function, decision boundaries) makes you a much better practitioner.

The workflow is deceptively simple:
→ Prepare your feature matrix and labels
→ Fit the model
→ Predict and evaluate

100% accuracy on the training set sounds great until you remember that's 6 data points. Overfitting awareness starts early.

One week in. The fundamentals are clicking.

#MachineLearning #LogisticRegression #ScikitLearn #100DaysOfML
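The three workflow steps really are about this short in scikit-learn. The 6-point dataset below is a hypothetical stand-in at the same scale as the lab's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 6-point toy set (2 features, binary labels)
X = np.array([[0.5, 1.5], [1.0, 1.0], [1.5, 0.5],
              [3.0, 0.5], [2.0, 2.0], [1.0, 2.5]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)          # fit: learn the weights behind the sigmoid

print(model.predict(X))  # predict on the training data itself
print(model.score(X, y)) # training accuracy: likely 1.0 on 6 separable points
```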
Recently, I implemented both Linear Regression and Logistic Regression from scratch using Python and NumPy, emphasizing vectorization and mathematical understanding.

For both models, I used two feature vectors and created animated visualizations of the learning process, illustrating how the decision boundary (for Logistic Regression) and the regression plane/line (for Linear Regression) evolve step by step during gradient descent. A major goal was to ensure the code was scalable, efficient, and mathematically transparent.

Key aspects of the implementation:
• Fully vectorized code, avoiding unnecessary loops
• Significant speed improvement due to vectorization, enhancing training efficiency
• Generalization capability from 2 features to n-feature input spaces
• Structured for straightforward future implementation of Lasso (L1) and Ridge (L2) regularization
• Visualization currently limited to 2D feature space for interpretability

Building these models from scratch deepened my understanding of the underlying mathematics, particularly gradient descent, cost functions, normalization, decision boundaries, and parameter updates. Writing the algorithm myself proved to be a more insightful learning experience than simply using a library.

Code can be found at: https://lnkd.in/ghqdDxMg

#MachineLearning #LinearRegression #LogisticRegression #LearningByBuilding
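For flavor, here is what the vectorized core of such a logistic-regression trainer might look like; this is a sketch in the spirit of the post, not the linked repo's code. The key line is the gradient `X.T @ (p - y) / m`, which updates all n weights at once with no Python loop over features or examples:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, alpha=0.1, iters=5000):
    """Vectorized batch gradient descent for logistic regression.

    X: (m, n) feature matrix, y: (m,) labels in {0, 1}.
    Works for any number of features n, not just 2.
    """
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(iters):
        p = sigmoid(X @ w + b)        # all m predictions at once
        err = p - y                   # (m,) residuals
        w -= alpha * (X.T @ err) / m  # vectorized gradient for all n weights
        b -= alpha * err.mean()
    return w, b

# Tiny linearly separable example with 2 features
X = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0],
              [3.0, 4.0], [4.0, 3.0], [4.0, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w, b = train_logreg(X, y)
print((sigmoid(X @ w + b) >= 0.5).astype(int))  # -> [0 0 0 1 1 1]
```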