Machine Learning: Loss Functions

Published On 2022/06/08 Wednesday, Singapore

This post covers popular loss functions used in machine learning and deep learning models.

1. Cross Entropy Loss

The Cross Entropy Loss function is widely used for classification algorithms. Algorithms that use cross entropy include Logistic Regression and neural networks for classification tasks.

Binary Cross Entropy Loss is the special case of cross entropy loss when the number of classes equals 2. It is also called log loss, logarithmic loss, or logistic loss.

\[\begin{align*} L = -\sum_{i=1}^m\big[y_i\log(\hat{y}_i)+(1-y_i)\log(1-\hat{y}_i)\big] \end{align*}\]
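
A minimal NumPy sketch of this formula (the function name, example arrays, and the `eps` clipping constant are assumptions for illustration, not from a particular library):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross entropy summed over m samples, as in the formula above."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Example: two samples with predicted probabilities for the positive class
y_true = np.array([1, 0])
y_pred = np.array([0.9, 0.2])
print(binary_cross_entropy(y_true, y_pred))  # -(log 0.9 + log 0.8) ≈ 0.3285
```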


Multiclass Cross Entropy Loss

\[\begin{align*} L = -\sum_{i=1}^m\sum_{c=1}^{K} y_{i,c}\log(\hat{y}_{i,c}) \end{align*}\]

where \(K\) is the number of classes and \(y_{i,c}\) is the one-hot indicator that sample \(i\) belongs to class \(c\).
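
As a quick sketch, the same sum can be computed with NumPy, assuming one-hot labels and predicted probabilities that sum to 1 per row (the names and example values below are made up for illustration):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Sum over samples and classes of -y_{i,c} * log(yhat_{i,c}).
    y_true is one-hot encoded; each row of y_pred sums to 1."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

# Example: two samples, three classes
y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6]])
print(categorical_cross_entropy(y_true, y_pred))  # -(log 0.7 + log 0.6) ≈ 0.8675
```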



2. Hinge Loss

Hinge Loss is used for Support Vector Machines (maximum-margin classification), where the labels take values \(y_i \in \{-1, +1\}\) and \(\hat{y}_i\) is the raw decision score.

\[\begin{align*} L = \sum_{i=1}^m\max(0,\,1-y_i\hat{y}_i) \end{align*}\]
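
A minimal sketch of this loss, assuming labels in \{-1, +1\} and raw decision scores (the function name and example values are illustrative only):

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Hinge loss summed over samples; y_true in {-1, +1},
    scores are raw decision values such as w.x + b."""
    return np.sum(np.maximum(0.0, 1.0 - y_true * scores))

# A point outside the margin contributes 0, one inside the margin a small
# penalty, and a misclassified one a larger penalty.
y_true = np.array([1, -1, 1])
scores = np.array([2.0, 0.5, -0.3])
print(hinge_loss(y_true, scores))  # 0 + 1.5 + 1.3 = 2.8
```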



3. Squared Error Loss

Squared Error Loss functions are used for regression algorithms.

Mean Squared Error

\[\begin{align*} MSE = \frac{1}{m}\sum_{i=1}^m\big(y_i-\hat{y}_i\big)^2 \end{align*}\]
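
In NumPy this is a one-liner; the function name and example arrays below are just for illustration:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """Average of squared residuals over m samples."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([3.0, -0.5, 2.0])
y_pred = np.array([2.5,  0.0, 2.0])
print(mean_squared_error(y_true, y_pred))  # (0.25 + 0.25 + 0) / 3 ≈ 0.1667
```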



Mean Absolute Error

\[\begin{align*} MAE = \frac{1}{m}\sum_{i=1}^m|y_i-\hat{y}_i| \end{align*}\]
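
And the corresponding sketch for MAE, using the same illustrative arrays as the MSE example above:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Average of absolute residuals over m samples."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([3.0, -0.5, 2.0])
y_pred = np.array([2.5,  0.0, 2.0])
print(mean_absolute_error(y_true, y_pred))  # (0.5 + 0.5 + 0) / 3 ≈ 0.3333
```

Compared with MSE, MAE penalizes large residuals linearly rather than quadratically, so it is less sensitive to outliers.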





