The Asymptotic Performance of AdaBoost

Video Link: https://www.youtube.com/watch?v=gUcVR7f0WKs


Google Tech Talks
May 24, 2007

ABSTRACT

Many popular classification algorithms, including AdaBoost and the support vector machine, minimize a cost function that can be viewed as a convex surrogate of the 0-1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. In this talk, we consider the universal consistency of such methods: does the risk, or expectation of the 0-1 loss, approach its optimal value, no matter what i.i.d. process generates the data?

Credits: Speaker: Peter Bartlett
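The convex-surrogate idea from the abstract can be illustrated with a minimal sketch (not from the talk itself): the 0-1 loss of a classifier on an example depends only on the margin, and both AdaBoost's exponential loss and the SVM's hinge loss are convex functions of the margin that upper-bound the 0-1 loss everywhere.

```python
import math

def zero_one_loss(margin):
    # 0-1 loss: 1 if the example is misclassified (margin <= 0), else 0
    return 1.0 if margin <= 0 else 0.0

def exponential_loss(margin):
    # AdaBoost's surrogate: exp(-margin), convex and an upper bound on 0-1 loss
    return math.exp(-margin)

def hinge_loss(margin):
    # SVM's surrogate: max(0, 1 - margin), also convex and an upper bound
    return max(0.0, 1.0 - margin)

# Each surrogate upper-bounds the 0-1 loss at every margin value checked
for m in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert exponential_loss(m) >= zero_one_loss(m)
    assert hinge_loss(m) >= zero_one_loss(m)
```

Because the surrogates are convex, minimizing their empirical average is tractable; the talk's question is whether driving the surrogate risk to its minimum also drives the 0-1 risk to its optimal value.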






