Hinge loss

cosmos 29th November 2017 at 12:46pm
Classification Loss function

https://en.m.wikipedia.org/wiki/Hinge_loss Video

A Loss function that appears in Max-margin learning classifiers, such as soft Support vector machines. It has the form:

$\sum_i \max\{0, 1 - y_i \tilde{y}_i\}$

Where $\tilde{y}_i$ is the "raw" output of the classifier's decision function, not the predicted class label (in probabilistic models this raw output is interpreted as the probability of the class), and the true labels $y_i$ take values in $\{-1, +1\}$. For SVMs, $\tilde{y}_i = \mathbf{w}^T \mathbf{x}_i$
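The sum above can be computed directly from the raw scores. A minimal sketch for a linear classifier, using made-up data (the weight vector and points are hypothetical, not from the note):

```python
import numpy as np

def hinge_loss(w, X, y):
    """Total hinge loss sum_i max(0, 1 - y_i * (w^T x_i)).

    y holds labels in {-1, +1}; X rows are the inputs x_i.
    """
    scores = X @ w            # raw decision values y_tilde_i = w^T x_i
    margins = y * scores      # y_i * y_tilde_i
    return np.sum(np.maximum(0.0, 1.0 - margins))

# Illustrative (hypothetical) data
X = np.array([[2.0, 0.0],    # confidently correct: margin 2.0, loss 0
              [0.2, 0.1],    # correct but inside the margin: loss 0.8
              [-1.0, 0.0]])  # wrong side: margin -1.0, loss 2.0
w = np.array([1.0, 0.0])
y = np.array([1, 1, 1])
print(hinge_loss(w, X, y))  # 0 + 0.8 + 2.0 = 2.8
```

Note that the second point contributes loss even though it is classified correctly, because its margin is below 1.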

This means that getting something correct by a large margin ($y_i \tilde{y}_i > 1$) is not rewarded further, but getting something wrong by a large margin is penalized a lot, linearly in the size of the error.

Figure: Hinge loss and some approximations to it.

The hinge loss can also be considered an approximation (an upper bound) to the 0-1 loss, or classification error. On the other hand, the Logistic regression loss function can be seen as a smooth version of the hinge loss.
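The relationship between the three losses can be seen by evaluating each as a function of the margin $m = y_i \tilde{y}_i$; a small sketch (the tabulated margins are arbitrary illustrative values):

```python
import math

def zero_one(m):
    """0-1 loss: 1 for a misclassification (margin <= 0), else 0."""
    return 0.0 if m > 0 else 1.0

def hinge(m):
    """Hinge loss: max(0, 1 - m); upper-bounds the 0-1 loss."""
    return max(0.0, 1.0 - m)

def logistic(m):
    """Logistic loss log(1 + exp(-m)): a smooth, everywhere-positive
    decreasing function of the margin (natural log; equals ln 2 at m = 0)."""
    return math.log(1.0 + math.exp(-m))

for m in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(f"m={m:+.1f}  0-1={zero_one(m):.2f}  "
          f"hinge={hinge(m):.2f}  logistic={logistic(m):.3f}")
```

At large positive margins both hinge and logistic losses approach 0, and at large negative margins both grow roughly linearly in $-m$; unlike the hinge, the logistic loss is differentiable everywhere and never exactly zero.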