From the course: Deep Learning with Python: Optimizing Deep Learning Models
Applying dropout regularization to a deep learning model - Python Tutorial
- [Instructor] In this video, you will learn how to apply dropout regularization to a deep learning model in order to reduce overfitting. I'll be writing the code in the 02_07e file. You can follow along by completing the empty code cells in the 02_07b file. Make sure to run the previously written code to import and preprocess the data, as well as to build and train the baseline model. I've already done so. Looking at the training and validation loss curves, we can see that the baseline model overfits the training data. A clear indicator of overfitting is the divergence between the training and validation loss, which is visible in the curves above. To help minimize overfitting, let's apply dropout regularization to the baseline model. Dropout regularization randomly deactivates a fraction of neurons during training, which forces the network to learn robust features that do not depend too heavily on any specific neuron. To apply dropout regularization to…
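The course files themselves are not shown here, and in a Keras model dropout is typically added by inserting `Dropout(rate)` layers between the dense layers. As a framework-free sketch of the mechanism the transcript describes, the function below implements "inverted" dropout in NumPy: during training it zeroes a random fraction of activations and rescales the survivors so the expected activation is unchanged. The function name and the rate of 0.5 are illustrative assumptions, not the course's exact code.

```python
import numpy as np

def dropout_forward(activations, rate, training=True, rng=None):
    """Inverted dropout (illustrative sketch, not the course's code).

    During training, zero a fraction `rate` of units at random and scale
    the survivors by 1/(1-rate) so the expected activation is unchanged.
    At inference time (training=False), pass activations through untouched.
    """
    if not training or rate == 0.0:
        return activations
    rng = rng or np.random.default_rng(0)
    # Boolean mask: True keeps a unit, False deactivates it.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

a = np.ones((4, 10))
out = dropout_forward(a, rate=0.5)
# Surviving units are scaled to 2.0; dropped units become 0.0.
```

Because each forward pass uses a fresh random mask, no single neuron can be relied on, which is exactly the robustness effect the transcript describes.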
Contents
- The bias-variance trade-off (3m 33s)
- Lasso and ridge regularization (3m 56s)
- Applying L1 regularization to a deep learning model (3m 21s)
- Applying L2 regularization to a deep learning model (3m 16s)
- Elastic Net regularization (2m 29s)
- Dropout regularization (2m 52s)
- Applying dropout regularization to a deep learning model (3m 21s)