AdaBoost


AdaBoost, short for Adaptive Boosting, is a machine learning algorithm formulated by Yoav Freund and Robert Schapire. It is a meta-algorithm and can be used in conjunction with many other learning algorithms to improve their performance. AdaBoost is adaptive in the sense that subsequent classifiers are tweaked in favor of those instances misclassified by previous classifiers. AdaBoost is sensitive to noisy data and outliers; in some problems, however, it can be less susceptible to overfitting than many other learning algorithms.

AdaBoost calls a weak classifier repeatedly in a series of rounds <math> t = 1,\ldots,T</math>. For each call, a distribution of weights <math>D_{t}</math> is updated that indicates the importance of each example in the data set for the classification. On each round, the weights of incorrectly classified examples are increased (or, alternatively, the weights of correctly classified examples are decreased), so that the new classifier focuses more on those examples.

The algorithm for the binary classification task

Given: <math>(x_{1},y_{1}),\ldots,(x_{m},y_{m})</math> where <math>x_{i} \in X,\, y_{i} \in Y = \{-1, +1\}</math>

Initialise <math>D_{1}(i) = \frac{1}{m}, i=1,\ldots,m.</math>

For <math>t = 1,\ldots,T</math>:

  • Find the classifier <math>h_{t} : X \to \{-1,+1\}</math> that minimizes the error with respect to the distribution <math>D_{t}</math>:
    <math>h_{t} = \arg \min_{h_{j} \in \mathcal{H}} \epsilon_{j}</math>, where <math> \epsilon_{j} = \sum_{i=1}^{m} D_{t}(i)[y_i \ne h_{j}(x_{i})]</math>
  • Prerequisite: <math>\epsilon_{t} < 0.5</math>, otherwise stop.
  • Choose <math>\alpha_{t} \in \mathbf{R}</math>, typically <math>\alpha_{t}=\frac{1}{2}\ln\frac{1-\epsilon_{t}}{\epsilon_{t}}</math>, where <math>\epsilon_{t}</math> is the weighted error rate of classifier <math>h_{t}</math>.
  • Update:

<math>D_{t+1}(i) = \frac{ D_{t}(i) \, e^{- \alpha_{t} y_{i} h_{t}(x_{i})} }{ Z_{t} }</math>
where <math>Z_{t}</math> is a normalization factor (chosen so that <math>D_{t+1}</math> is a probability distribution, i.e. it sums to one over all <math>i</math>).

Output the final classifier:

<math>H(x) = \textrm{sign}\left( \sum_{t=1}^{T} \alpha_{t}h_{t}(x)\right)</math>
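For concreteness, the following is a minimal NumPy sketch of the training loop above, using axis-aligned decision stumps as the weak-learner family <math>\mathcal{H}</math>. The helper names (fit_stump, adaboost_train, adaboost_predict) and the choice of stumps are illustrative assumptions, not part of the algorithm's specification.

<syntaxhighlight lang="python">
# A minimal NumPy sketch of the AdaBoost training loop above, using decision
# stumps as the weak-learner family H.  Helper names are illustrative.
import numpy as np

def fit_stump(X, y, D):
    """Pick the stump h(x) = s * sign(x[:, f] - thr) (with sign(0) taken as +1)
    that minimizes the weighted error sum_i D(i) * [y_i != h(x_i)]."""
    m, n = X.shape
    best_err, best_params = np.inf, None
    for f in range(n):                         # feature to split on
        for thr in np.unique(X[:, f]):         # candidate thresholds
            for s in (+1, -1):                 # orientation of the stump
                pred = s * np.where(X[:, f] >= thr, 1, -1)
                err = D[pred != y].sum()       # weighted error epsilon_j
                if err < best_err:
                    best_err, best_params = err, (f, thr, s)
    return best_err, best_params

def adaboost_train(X, y, T=20):
    """Run T rounds of AdaBoost; y must contain labels in {-1, +1}."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)                    # D_1(i) = 1/m
    ensemble = []                              # list of (alpha_t, f, thr, s)
    for t in range(T):
        eps, (f, thr, s) = fit_stump(X, y, D)
        if eps >= 0.5:                         # prerequisite epsilon_t < 0.5
            break
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))
        pred = s * np.where(X[:, f] >= thr, 1, -1)
        D = D * np.exp(-alpha * y * pred)      # up-weight mistakes, down-weight hits
        D = D / D.sum()                        # divide by the normalizer Z_t
        ensemble.append((alpha, f, thr, s))
    return ensemble

def adaboost_predict(ensemble, X):
    """H(x) = sign( sum_t alpha_t * h_t(x) )."""
    score = np.zeros(X.shape[0])
    for alpha, f, thr, s in ensemble:
        score += alpha * s * np.where(X[:, f] >= thr, 1, -1)
    return np.sign(score)
</syntaxhighlight>

The exhaustive stump search keeps the sketch self-contained; in practice any weak learner that accepts per-example weights (or is trained on a weighted resample) can play the role of <math>h_{t}</math>.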

The equation to update the distribution <math>D_{t}</math> is constructed so that:

<math>e^{- \alpha_{t} y_{i} h_{t}(x_{i})} \begin{cases} <1, & y_{i}=h_{t}(x_{i}) \\ >1, & y_{i} \ne h_{t}(x_{i}) \end{cases}</math>

Thus, after selecting an optimal classifier <math>h_{t} \,</math> for the distribution <math>D_{t} \,</math>, the examples <math>x_{i} \,</math> that the classifier <math>h_{t} \,</math> identified correctly are weighted less and those that it identified incorrectly are weighted more. Therefore, when the algorithm is testing the classifiers on the distribution <math>D_{t+1} \,</math>, it will select a classifier that better identifies those examples that the previous classifier missed.
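As an illustrative numerical example (not from the original text): if <math>\epsilon_{t} = 0.2</math>, then <math>\alpha_{t} = \frac{1}{2}\ln\frac{0.8}{0.2} \approx 0.69</math>, so before normalization each correctly classified example's weight is multiplied by <math>e^{-0.69} \approx 0.5</math>, while each misclassified example's weight is multiplied by <math>e^{0.69} \approx 2</math>. Note that the prerequisite <math>\epsilon_{t} < 0.5</math> guarantees <math>\alpha_{t} > 0</math>, so these factors really are below and above one.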

Statistical Understanding of Boosting

Boosting can be seen as the minimization of a convex loss function over a convex set of functions.[1] Specifically, the loss being minimized is the exponential loss

<math>\sum_i e^{-y_i f(x_i)}</math>

and we are seeking a function

<math>f = \sum_t \alpha_t h_t</math>
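Read this way, each round of AdaBoost is a greedy, stage-wise step on the exponential loss; the following standard derivation (restated here, not part of the original text) recovers the choice of <math>\alpha_{t}</math> given above. With <math>h_{t}</math> fixed, the contribution of round <math>t</math> to the weighted loss is

<math>Z_{t}(\alpha) = \sum_{i=1}^{m} D_{t}(i)\, e^{-\alpha y_{i} h_{t}(x_{i})} = (1-\epsilon_{t})\, e^{-\alpha} + \epsilon_{t}\, e^{\alpha},</math>

and setting the derivative with respect to <math>\alpha</math> to zero yields <math>\alpha_{t} = \frac{1}{2}\ln\frac{1-\epsilon_{t}}{\epsilon_{t}}</math>.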

References

  1. T. Zhang, "Statistical behavior and consistency of classification methods based on convex risk minimization", Annals of Statistics 32 (1), 2004.
