Insights on Gradient-Based Algorithms in High-Dimensional Learning

Video Link: https://www.youtube.com/watch?v=rk7fIhCH8Gc



Duration: 59:46


Lenka Zdeborová (CEA Saclay)
Richard M. Karp Distinguished Lecture, Sep. 14, 2020
https://simons.berkeley.edu/events/rmklectures2020-fall-1

Gradient descent algorithms and their noisy variants, such as the Langevin dynamics or multipass stochastic gradient descent, are at the center of attention in machine learning. Yet their behavior remains perplexing, in particular in the high-dimensional nonconvex setting. In this talk, I will present several high-dimensional and (mostly) nonconvex statistical learning problems in which the performance of gradient-based algorithms can be analyzed down to a constant. The common point of these settings is that the data come from a probabilistic generative model leading to problems for which, in the high-dimensional limit, statistical physics provides exact closed solutions for the performance of the gradient-based algorithms. The covered settings include the spiked mixed matrix-tensor model, the perceptron, and phase retrieval.
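To make the distinction between plain gradient descent and its Langevin variant concrete, the following is a minimal illustrative sketch (not code from the talk) on a toy phase-retrieval objective. The problem sizes, step size, and temperature are arbitrary choices made purely for illustration.

# Illustrative sketch: gradient descent vs. Langevin dynamics on a toy
# phase-retrieval problem. Not code from the talk; all parameters are
# arbitrary choices for the illustration.
import numpy as np

rng = np.random.default_rng(0)

d, n = 50, 400                      # dimension and number of measurements
x_star = rng.standard_normal(d)     # hidden signal
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((n, d))     # Gaussian sensing vectors
y = (A @ x_star) ** 2               # phaseless measurements y_i = <a_i, x*>^2

def loss(x):
    # Nonconvex empirical risk: mean squared error on the squared projections.
    return np.mean(((A @ x) ** 2 - y) ** 2)

def grad(x):
    r = (A @ x) ** 2 - y            # residuals
    return 4.0 * (A.T @ (r * (A @ x))) / n

def gradient_descent(x0, lr=1e-3, steps=2000):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def langevin(x0, lr=1e-3, temperature=1e-4, steps=2000):
    # Same update as gradient descent plus injected Gaussian noise,
    # i.e. a discretized Langevin dynamics at the given temperature.
    x = x0.copy()
    for _ in range(steps):
        noise = rng.standard_normal(d)
        x -= lr * grad(x) + np.sqrt(2.0 * lr * temperature) * noise
    return x

x0 = rng.standard_normal(d) / np.sqrt(d)
for name, x_hat in [("GD", gradient_descent(x0)), ("Langevin", langevin(x0))]:
    # Overlap |<x_hat, x*>| / ||x_hat|| measures recovery up to the global sign.
    overlap = abs(x_hat @ x_star) / np.linalg.norm(x_hat)
    print(f"{name}: loss={loss(x_hat):.4f}, overlap={overlap:.3f}")

The only difference between the two dynamics is the noise term scaled by sqrt(2 * lr * temperature); the overlap with the hidden signal is the kind of order parameter whose high-dimensional evolution the talk analyzes exactly.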







Tags:
Simons Institute
Theory of Computing
Theory of Computation
Theoretical Computer Science
Computer Science
UC Berkeley
Lenka Zdeborová
Richard M. Karp Distinguished Lecture