From the course: Deep Learning with Python: Optimizing Deep Learning Models


Applying batch normalization to a deep learning model


- [Instructor] In this video, you'll learn how to apply batch normalization to a deep learning model. I'll be writing the code in the 05_02e file. You can follow along by completing the empty code cells in the 05_02b file. Note that this is the first in a three-video sequence that teaches you how to apply batch normalization, gradient clipping, early stopping, and learning rate scheduling to a deep learning model. Let's get started by running the previously written code to import and preprocess the data. So the first thing I need to do here is select the kernel for my environment. So Python Environments. I'm going to say Python 3.10. Now I'm going to click on my next code cell, and I'm going to say Run Previous. Okay, so this is going to go ahead and run the code above to import and preprocess the data. Okay, so that is done. So our model consists of an input layer with 784 nodes, two hidden layers with 512 and 128 nodes respectively, and an output layer with 10 nodes…
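The video's actual code lives in the 05_02e notebook, which isn't reproduced here. As a reminder of what a batch normalization layer computes at each hidden layer, here is a minimal NumPy sketch; the function name `batch_norm`, the batch size of 64, and the use of a 128-unit layer are illustrative assumptions, not code from the course.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass for one mini-batch.

    x: activations of shape (batch_size, features)
    gamma, beta: learnable per-feature scale and shift, shape (features,)
    eps: small constant for numerical stability
    """
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta              # restore expressiveness via scale/shift

# Simulated activations for a 128-unit hidden layer (like the model's second hidden layer)
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 128))
y = batch_norm(x, gamma=np.ones(128), beta=np.zeros(128))
```

With `gamma=1` and `beta=0`, each feature of `y` has approximately zero mean and unit variance across the batch; in a framework such as Keras, the equivalent layer also tracks running statistics for use at inference time.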
