Machine Learning: SVM Mathematics

Published On 2022/01/08 Saturday, Singapore

This post covers the mathematics behind the Support Vector Machine (SVM). Specifically, it covers the following:

  1. Margin and Support Vector

1. Margin and Support Vector

Assume we have a training dataset $D=\{(\mathbf{x}_1,y_1),\ldots,(\mathbf{x}_n,y_n)\}$, where $(\mathbf{x}_i,y_i)$ is a single sample. $\mathbf{x}_i\in \mathbb{R}^m$ is the feature vector, where $m$ is the number of feature variables. $y_i$ is the target variable (label), taking the value $-1$ or $+1$, i.e. $y_i\in \{-1,+1\}$.
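For example, with $m=2$ feature variables, one sample might be $\mathbf{x}_i=(0.5,\,1.2)^T$ with label $y_i=+1$ (the values here are hypothetical, for illustration only).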

The Support Vector Machine constructs a hyperplane in a high-dimensional space to separate the examples of the two classes, $y_i = -1$ vs. $y_i = +1$; here the dimension of the space is $m$. The hyperplane can be described by the following equation:

\[\mathbf{x}^T\mathbf{w} +b=0\]

where $\mathbf{w}$ is the weight vector, which determines the orientation of the hyperplane, and $b$ is the bias, which determines the location of the hyperplane.
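To make this concrete, below is a minimal NumPy sketch of the decision rule implied by the hyperplane: a point $\mathbf{x}$ is assigned to a class according to the sign of $\mathbf{x}^T\mathbf{w}+b$, and points where this value is exactly zero lie on the hyperplane itself. The values of $\mathbf{w}$ and $b$ here are hypothetical, chosen only to illustrate the geometry, not parameters learned by an SVM.

```python
import numpy as np

# Hypothetical parameters, for illustration only (not learned by an SVM).
w = np.array([2.0, -1.0])  # weight vector: sets the hyperplane's orientation
b = -0.5                   # bias: shifts the hyperplane's location

def decision(x):
    """Evaluate x^T w + b; zero means x lies exactly on the hyperplane."""
    return x @ w + b

# A point is assigned class +1 or -1 by the sign of the decision value.
x_new = np.array([1.0, 0.5])
print(decision(x_new))                 # 1.0 -> positive side of the hyperplane
print(int(np.sign(decision(x_new))))   # predicted label: 1
```

With $m=2$, the set $\mathbf{x}^T\mathbf{w}+b=0$ is simply a line in the plane; in higher dimensions the same equation describes an $(m-1)$-dimensional hyperplane.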


