Batch gradient descent - Python Tutorial
From the course: Deep Learning with Python: Optimizing Deep Learning Models
- [Instructor] In deep learning, optimization algorithms play a fundamental role in how neural networks are trained. They govern how the weights and biases of a model are updated during each iteration of training to minimize the loss function. By iteratively adjusting parameters based on the gradient of the loss function, these algorithms aim to find the values that yield the best predictions. Keras provides a variety of optimization algorithms, ranging from simple gradient-based methods to more advanced adaptive approaches. Each method has its own strengths and limitations, and understanding these differences is essential for choosing the right approach. One of the most fundamental optimization algorithms is batch gradient descent. It calculates the gradient of the loss function using the entire training dataset in a single pass, then uses this gradient to update the model's parameters. Imagine trail running downhill. Batch gradient descent calculates the best path down…
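To make the idea concrete, here is a minimal sketch of batch gradient descent, not code from the course: a linear model fit with mean squared error in NumPy, where the toy data, learning rate, and variable names are all assumptions chosen for illustration.

```python
import numpy as np

# Toy data: 100 samples of a noisy linear relationship (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0   # model parameters
lr = 0.1          # learning rate (hypothetical choice)

for epoch in range(200):
    # Forward pass over the ENTIRE dataset -- the defining trait of
    # batch gradient descent.
    y_pred = w * X[:, 0] + b
    error = y_pred - y

    # Gradients of the mean squared error with respect to w and b,
    # averaged over all training examples in a single pass.
    grad_w = 2.0 * np.mean(error * X[:, 0])
    grad_b = 2.0 * np.mean(error)

    # One parameter update per full pass over the data.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach w=3, b=1
```

In Keras itself, plain gradient descent is exposed as keras.optimizers.SGD; passing a batch_size equal to the full training set size to model.fit makes each update use the whole dataset, which amounts to batch gradient descent in practice.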
Contents
- Common loss functions in deep learning (5m 4s)
- Batch gradient descent (3m 32s)
- Stochastic gradient descent (SGD) (2m 55s)
- Mini-batch gradient descent (3m 37s)
- Adaptive Gradient Algorithm (AdaGrad) (4m 43s)
- Root Mean Square Propagation (RMSProp) (2m 40s)
- Adaptive Delta (AdaDelta) (1m 47s)
- Adaptive Moment Estimation (Adam) (3m 8s)