From the course: Deep Learning with Python: Optimizing Deep Learning Models


Lasso and ridge regularization


- [Instructor] Regularization is a crucial technique employed to prevent overfitting, a scenario where a model learns the training data too well, including the noise and minor fluctuations that do not represent the true patterns. Overfitting leads to a model that performs well on training data but struggles to generalize effectively to unseen data. To address this, L1 and L2 regularization are two widely used methods that add a penalty to the loss function during training, thereby encouraging simpler models and reducing the likelihood of overfitting. L1 regularization, also known as lasso regularization, modifies the loss function by adding the sum of the absolute values of the weights as a penalty term. Mathematically, L1 regularization is expressed as shown here, where L represents the original loss function, lambda is a regularization parameter that controls the strength of the penalty, and wi are the weights or parameters of the model. By adding the absolute values of the weights, L1…
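The formula referenced above appears on the instructor's slide; the penalized loss it describes, L_total = L + lambda * sum(|w_i|) for L1 (lasso) and L_total = L + lambda * sum(w_i^2) for L2 (ridge), can be sketched in plain NumPy. This is an illustrative helper (the function name and signature are not from the course), not the course's own code:

```python
import numpy as np

def regularized_loss(base_loss, weights, lam=0.01, kind="l1"):
    """Add an L1 (lasso) or L2 (ridge) penalty to a base loss value.

    L1: base_loss + lam * sum(|w_i|)   -- pushes many weights to exactly zero
    L2: base_loss + lam * sum(w_i**2)  -- shrinks all weights toward zero
    """
    # Flatten all weight tensors into one vector of parameters w_i.
    w = np.concatenate([np.ravel(x) for x in weights])
    if kind == "l1":
        penalty = lam * np.sum(np.abs(w))
    elif kind == "l2":
        penalty = lam * np.sum(w ** 2)
    else:
        raise ValueError("kind must be 'l1' or 'l2'")
    return base_loss + penalty

# Example: two weight tensors with |w| summing to 6 and w^2 summing to 14.
w = [np.array([1.0, -2.0]), np.array([[3.0]])]
print(regularized_loss(10.0, w, lam=0.1, kind="l1"))  # 10.0 + 0.1 * 6  = 10.6
print(regularized_loss(10.0, w, lam=0.1, kind="l2"))  # 10.0 + 0.1 * 14 = 11.4
```

In practice, deep learning frameworks apply these penalties for you (e.g. per-layer weight regularizers in Keras), but the arithmetic is exactly this: the penalty grows with the size of the weights, so minimizing the total loss favors smaller, simpler models.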
