Best of Both Worlds Algorithms from I.I.D. to Adversarial Data

Video Link: https://www.youtube.com/watch?v=9wkwJYaxmgg



Duration: 37:36


Julian Zimmert (Google Research)
https://simons.berkeley.edu/talks/tbd-473
Quantifying Uncertainty: Stochastic, Adversarial, and Beyond

Most of machine learning theory assumes i.i.d. (independent, identically distributed) data. However, the i.i.d. assumption is rarely satisfied in practice. Online learning departs from this assumption by changing the evaluation measure from expected error on future samples to regret with respect to the best prediction strategy in hindsight, chosen from a limited set of strategies. The revised evaluation measure makes it possible to go as far as considering adversarial environments, for example spam filtering or learning in two-player games. The ability to learn in adversarial environments comes at a price, though: the convergence rates of algorithms for adversarial environments are typically much slower than those of algorithms for i.i.d. environments. A practitioner thus has to choose between safe algorithms for adversarial environments with slow convergence rates and fast algorithms for i.i.d. environments, with the implied risk of complete failure if the model assumptions are violated. But does the practitioner have to take that risk? Can we design algorithms that adapt to the nature of the environment?
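As a concrete illustration of the kind of adaptive algorithm the title refers to, the sketch below (not taken from the talk itself) follows the style of Tsallis-INF (Zimmert and Seldin): online mirror descent over the probability simplex with a 1/2-Tsallis-entropy regularizer, which attains the adversarial O(sqrt(KT)) regret rate for multi-armed bandits while simultaneously achieving the faster logarithmic rate in stochastic environments. The callback `draw_loss`, the learning-rate schedule, and the Newton tolerance are illustrative assumptions, not details from the talk.

```python
import numpy as np

def tsallis_inf(draw_loss, K, T, seed=0):
    """Minimal sketch of a best-of-both-worlds bandit algorithm in the
    style of Tsallis-INF: mirror descent with 1/2-Tsallis entropy.

    `draw_loss(arm, t)` is an assumed callback returning a loss in [0, 1];
    the environment behind it may be i.i.d. or adversarial.
    """
    rng = np.random.default_rng(seed)
    L_hat = np.zeros(K)          # cumulative importance-weighted loss estimates
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)   # decreasing learning rate (assumed schedule)

        # Mirror-descent step: find x so that w_i = 4 / (eta * (L_hat_i - x))^2
        # lies on the probability simplex.  f(x) = sum_i w_i(x) - 1 is
        # increasing and convex in x, so Newton's method started to the right
        # of the root converges monotonically.
        x = L_hat.min() - 2.0 / eta          # here w_i <= 1 for every arm
        for _ in range(30):
            w = 4.0 / (eta * (L_hat - x)) ** 2
            f = w.sum() - 1.0
            if abs(f) < 1e-12:
                break
            x -= f / (eta * (w ** 1.5).sum())  # f'(x) = eta * sum_i w_i^{3/2}

        w /= w.sum()                          # guard against residual error
        arm = rng.choice(K, p=w)
        loss = draw_loss(arm, t)
        L_hat[arm] += loss / w[arm]           # unbiased importance weighting
    return L_hat
```

The key design point is that the algorithm never tests which regime it is in: the same Tsallis-entropy update is provably near-optimal in both the i.i.d. and the adversarial case, which is exactly the "best of both worlds" property the abstract asks for.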

Tags:
Simons Institute
theoretical computer science
UC Berkeley
Computer Science
Theory of Computation
Theory of Computing
Quantifying Uncertainty: Stochastic, Adversarial, and Beyond
Julian Zimmert