From the course: Deep Learning with Python: Optimizing Deep Learning Models
Elastic Net regularization - Python Tutorial
- [Instructor] Elastic net regularization combines the penalties of both L1 and L2 regularization, making it especially useful when some features are highly correlated or when neither L1 nor L2 regularization alone gives optimal results. The loss function for elastic net regularization is defined as shown here, where alpha controls the overall strength of the regularization and rho is a mixing parameter between L1 and L2 regularization. Values of rho between zero and one create a combination of both L1 and L2. When rho equals one, the effect is the same as L1, or lasso, regularization; when rho equals zero, the effect is the same as L2 regularization. Essentially, elastic net regularization aims to leverage the benefits of both L1 and L2 regularization: encouraging sparsity like L1 for feature selection, ensuring that the model only uses the most relevant features, and stabilizing the model like L2.
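The penalty described above can be sketched numerically. A minimal NumPy sketch, assuming the common scikit-learn-style convention penalty = alpha * (rho * ||w||_1 + ((1 - rho) / 2) * ||w||_2^2); the function name and sample weights are illustrative, not from the course:

```python
import numpy as np

def elastic_net_penalty(weights, alpha=1.0, rho=0.5):
    """Elastic net penalty: alpha * (rho * L1 + ((1 - rho) / 2) * squared L2).

    rho=1 reduces to pure L1 (lasso); rho=0 reduces to pure L2 (ridge).
    """
    w = np.asarray(weights, dtype=float)
    l1 = np.sum(np.abs(w))   # L1 term: encourages sparsity / feature selection
    l2 = np.sum(w ** 2)      # squared L2 term: shrinks correlated weights together
    return alpha * (rho * l1 + (1.0 - rho) / 2.0 * l2)

w = np.array([0.5, -1.0, 0.0, 2.0])
print(elastic_net_penalty(w, alpha=0.1, rho=1.0))  # pure L1: 0.1 * 3.5 = 0.35
print(elastic_net_penalty(w, alpha=0.1, rho=0.0))  # pure L2: 0.1 * 5.25 / 2 = 0.2625
```

In a Keras model, the same L1/L2 mix can be applied per layer with `tf.keras.regularizers.L1L2(l1=..., l2=...)` passed as a layer's `kernel_regularizer`.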
Contents
- The bias-variance trade-off (3m 33s)
- Lasso and ridge regularization (3m 56s)
- Applying L1 regularization to a deep learning model (3m 21s)
- Applying L2 regularization to a deep learning model (3m 16s)
- Elastic Net regularization (2m 29s)
- Dropout regularization (2m 52s)
- Applying dropout regularization to a deep learning model (3m 21s)