From the course: Deep Learning with Python: Optimizing Deep Learning Models
Common loss functions in deep learning - Python Tutorial
- [Instructor] In machine learning, a loss function is a mathematical function that quantifies the error, or difference, between a model's predicted outputs and the actual target values in the training data. In deep learning, loss functions serve as the foundation for training neural networks: they provide the feedback, or error signal, that the optimization process needs to update the model's parameters, which are the weights and the biases. By minimizing the value of the loss function, the model learns to make predictions that are increasingly accurate over time. Selecting an appropriate loss function is crucial because it directly influences how a model learns and performs on specific tasks, such as regression, binary classification, or multi-class classification. For regression tasks, where the goal is to predict continuous values, the mean squared error, or MSE, loss function is a common choice. MSE calculates the average of the squared differences between the predicted values and…
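The MSE computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the course; the example target and prediction values are made up for demonstration.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average of the squared differences
    between the predicted values and the actual target values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Illustrative values (not from the course):
y_true = [3.0, 5.0, 2.0]   # actual targets
y_pred = [2.5, 5.0, 4.0]   # model predictions
print(mse(y_true, y_pred))  # mean of (0.25, 0.0, 4.0) ≈ 1.4167
```

Because the differences are squared, MSE penalizes large errors much more heavily than small ones, which is one reason it is a popular default for regression.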
Contents
- Common loss functions in deep learning (5m 4s)
- Batch gradient descent (3m 32s)
- Stochastic gradient descent (SGD) (2m 55s)
- Mini-batch gradient descent (3m 37s)
- Adaptive Gradient Algorithm (AdaGrad) (4m 43s)
- Root Mean Square Propagation (RMSProp) (2m 40s)
- Adaptive Delta (AdaDelta) (1m 47s)
- Adaptive Moment Estimation (Adam) (3m 8s)