Gradient Descent Lab in Andrew Ng's ML Specialization

Just completed the Gradient Descent lab in Andrew Ng's ML Specialization, and it genuinely clicked for me here. The concept: instead of guessing the best values for w and b in a linear model, gradient descent finds them automatically by repeatedly stepping in the direction that reduces error.

What I built from scratch in Python (sketched below):
✅ compute_cost(): measures how wrong the model is
✅ compute_gradient(): calculates which direction to move
✅ gradient_descent(): runs 10,000 iterations to find the optimal parameters

What surprised me most:
→ Starting from w=0, b=0, the algorithm found w≈200, b≈100 for a house price dataset
→ The cost dropped rapidly at first, then slowed as it approached the minimum, exactly like a ball rolling to the bottom of a bowl
→ Setting the learning rate too high (α = 0.8) made the model diverge: the cost shot up instead of down

That last point was the most valuable. Seeing the divergence on a plot made the theory real. Building these functions line by line beats reading about them any day.
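For anyone who wants the mechanics, here's a minimal sketch of those three functions. It's not the lab's exact code (the lab also tracks cost history for plotting), and the two-point toy dataset and variable names are my own stand-ins, chosen so the run reproduces the w≈200, b≈100 result above.

```python
import numpy as np

def compute_cost(x, y, w, b):
    # Mean squared error: J(w,b) = (1/2m) * sum((w*x_i + b - y_i)^2)
    m = x.shape[0]
    err = (w * x + b) - y
    return np.sum(err ** 2) / (2 * m)

def compute_gradient(x, y, w, b):
    # Partial derivatives of J with respect to w and b
    m = x.shape[0]
    err = (w * x + b) - y
    dj_dw = np.dot(err, x) / m   # dJ/dw = (1/m) * sum(err_i * x_i)
    dj_db = np.sum(err) / m      # dJ/db = (1/m) * sum(err_i)
    return dj_dw, dj_db

def gradient_descent(x, y, w, b, alpha, num_iters):
    # Repeatedly step w and b against the gradient
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient(x, y, w, b)
        w -= alpha * dj_dw
        b -= alpha * dj_db
    return w, b

# Toy house-price data (size in 1000 sqft -> price in $1000s); illustrative only
x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])

w, b = gradient_descent(x_train, y_train, w=0.0, b=0.0, alpha=1.0e-2, num_iters=10_000)
print(f"w = {w:.2f}, b = {b:.2f}")  # converges to roughly w = 200, b = 100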
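And the experiment behind that last bullet, reusing the functions and toy data from the sketch above: crank α up to 0.8 and the cost climbs instead of falling. The specific numbers depend on my made-up dataset, so treat them as illustrative.

```python
# Same toy data, oversized learning rate: every step overshoots the minimum
w, b = 0.0, 0.0
for i in range(5):
    dj_dw, dj_db = compute_gradient(x_train, y_train, w, b)
    w -= 0.8 * dj_dw
    b -= 0.8 * dj_db
    print(f"iter {i}: cost = {compute_cost(x_train, y_train, w, b):,.0f}")
# On this data the cost roughly triples each iteration instead of shrinking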

#MachineLearning #Python #AndrewNg #LearningInPublic #DataScience #GradientDescent