From the course: Deep Learning with Python: Optimizing Deep Learning Models
Lasso and ridge regularization - Python Tutorial
Lasso and ridge regularization
- [Instructor] Regularization is a crucial technique for preventing overfitting, a scenario where a model learns the training data too well, including the noise and minor fluctuations that do not represent the true patterns. Overfitting leads to a model that performs well on training data but struggles to generalize to unseen data. To address this, L1 and L2 regularization are two widely used methods that add a penalty to the loss function during training, thereby encouraging simpler models and reducing the likelihood of overfitting. L1 regularization, also known as lasso regularization, modifies the loss function by adding the sum of the absolute values of the weights as a penalty term. Mathematically, L1 regularization is expressed as L_total = L + lambda * sum_i |w_i|, where L represents the original loss function, lambda is a regularization parameter that controls the strength of the penalty, and w_i are the weights or parameters of the model. By adding the absolute values of the weights, L1…
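As a minimal sketch of how these penalty terms are computed, the snippet below adds L1 (lasso) and L2 (ridge) penalties to a base loss using NumPy. The weight values, base loss, and lambda here are illustrative numbers, not values from the course:

```python
import numpy as np

def l1_penalty(weights, lam):
    # L1 (lasso): lambda times the sum of absolute weight values
    return lam * np.sum(np.abs(weights))

def l2_penalty(weights, lam):
    # L2 (ridge): lambda times the sum of squared weight values
    return lam * np.sum(weights ** 2)

# Illustrative values (not from the course)
w = np.array([0.5, -1.0, 2.0])
base_loss = 0.3
lam = 0.01

total_l1 = base_loss + l1_penalty(w, lam)  # 0.3 + 0.01 * 3.5  = 0.335
total_l2 = base_loss + l2_penalty(w, lam)  # 0.3 + 0.01 * 5.25 = 0.3525
```

Because the L1 penalty grows linearly with each weight's magnitude, it tends to drive some weights exactly to zero, while the L2 penalty shrinks all weights smoothly toward zero without eliminating them.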
Contents
- The bias-variance trade-off (3m 33s)
- Lasso and ridge regularization (3m 56s)
- Applying L1 regularization to a deep learning model (3m 21s)
- Applying L2 regularization to a deep learning model (3m 16s)
- Elastic Net regularization (2m 29s)
- Dropout regularization (2m 52s)
- Applying dropout regularization to a deep learning model (3m 21s)