Directions in ML: Taking Advantage of Randomness in Expensive Optimization Problems

Subscribers: 344,000
Published on: 2021-03-08
Video Link: https://www.youtube.com/watch?v=PMTJIQ_Q5_0
Duration: 1:00:54
Views: 2,047

Optimization is at the heart of machine learning, and gradient computation is central to many optimization techniques. Stochastic optimization, in particular, has taken center stage as the principal method of fitting many models, from deep neural networks to variational Bayesian posterior approximations. Generally, one uses data subsampling to efficiently construct unbiased gradient estimators for stochastic optimization, but this is only one possibility. In this talk, I discuss two alternative approaches to constructing unbiased gradient estimates in machine learning problems. The first approach uses randomized truncation of objective functions defined as loops or limits. Such objectives arise in settings ranging from hyperparameter selection, to fitting parameters of differential equations, to variational inference using lower bounds on the log-marginal likelihood. The second approach revisits the Jacobian accumulation problem at the heart of automatic differentiation, observing that it is possible to collapse the linearized computational graph of, e.g., deep neural networks, in a randomized way such that less memory is used but little performance is lost. These projects are joint work with students Alex Beatson, Deniz Oktay, Joshua Aduol, and Nick McGreivy.
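To make the first approach concrete, below is a minimal Python sketch (not the speaker's code) of a randomized-truncation, "Russian roulette" style estimator: an objective defined as a limit F = lim_n F_n is rewritten as a telescoping sum of increments, a random truncation level N is drawn, and each retained increment is reweighted by 1 / P(N >= k) so the estimate stays unbiased in expectation. The helper name partial_objective and the capped geometric truncation distribution are illustrative assumptions, not the exact construction from the talk.

# Minimal sketch of a randomized-truncation (Russian roulette) estimator.
# Illustrative only; names and the truncation distribution are assumptions.
import random

def russian_roulette_estimate(partial_objective, p_continue=0.5, max_terms=50):
    """Unbiased single-sample estimate of lim_n partial_objective(n).

    partial_objective(k) returns the k-th partial value F_k; F_0 is taken to be 0.
    """
    # Draw the truncation level N from a geometric distribution capped at max_terms.
    n = 1
    while n < max_terms and random.random() < p_continue:
        n += 1

    estimate = 0.0
    prev = 0.0       # F_0
    survival = 1.0   # P(N >= k); equals 1 for k = 1
    for k in range(1, n + 1):
        cur = partial_objective(k)
        estimate += (cur - prev) / survival  # reweight the k-th increment by 1 / P(N >= k)
        prev = cur
        survival *= p_continue               # P(N >= k + 1) = p_continue ** k
    return estimate

# Example: the partial sums 1/2 + 1/4 + ... converge to 1, and the estimator
# recovers that value in expectation.
F = lambda k: sum(0.5 ** i for i in range(1, k + 1))
print(sum(russian_roulette_estimate(F) for _ in range(10000)) / 10000)

Averaging many such estimates recovers the limiting value while only ever evaluating a finite, randomly chosen number of terms, which is the source of the computational savings the abstract describes for loop- or limit-defined objectives.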

Learn more about the 2020-2021 Directions in ML: AutoML and Automating Algorithms virtual speaker series: https://aka.ms/diml




Other Videos By Microsoft Research


2021-03-29 Avatars: Finding a sense of self and others in the virtual world
2021-03-25 In pursuit of responsible AI: Bringing principles to practice
2021-03-25 Fairness-related harms in AI systems: Examples, assessment, and mitigation
2021-03-25 Enhancing mobile work and productivity with virtual reality
2021-03-23 Mixed reality and robotics: Unlocking more intuitive human-machine collaboration
2021-03-23 Project InnerEye: Augmenting cancer radiotherapy workflows with deep learning and open source
2021-03-23 AI advances in image captioning: Describing images as well as people do
2021-03-17 Reinforcement learning in Minecraft: Challenges and opportunities in multiplayer games
2021-03-17 Microsoft Vision Model ResNet-50: Pretrained vision model built with web-scale data
2021-03-11 A Tale of Two Cities: Software Developers in Practice During the COVID-19 Pandemic
2021-03-08 Directions in ML: Taking Advantage of Randomness in Expensive Optimization Problems
2021-03-08 AI and Gaming Research Summit 2021 - Fireside chat with Peter Lee and Kareem Choudhry
2021-03-08 AI and Gaming Research Summit 2021 - Computational Creativity (Day 1 Track 2.2)
2021-03-08 AI and Gaming Research Summit 2021 - Computational Creativity (Day 1 Track 2.1)
2021-03-08 AI and Gaming Research Summit 2021 - AI Agents (Day 1 Track 1.2)
2021-03-08 AI and Gaming Research Summit 2021 - AI Agents (Day 1 Track 1.1)
2021-03-08 AI and Gaming Research Summit 2021 – Welcome and Microsoft Plenary with Phil Spencer, Katja Hofmann
2021-03-08 AI and Gaming Research Summit 2021: Responsible Gaming (Day 2 Track 2.2)
2021-03-08 AI and Gaming Research Summit 2021: Responsible Gaming (Day 2 Track 2.1)
2021-03-08 AI and Gaming Research Summit 2021 - Understanding Players (Day 2 Track 1.2)
2021-03-08 AI and Gaming Research Summit 2021 - AI Agents (Day 2 Track 1.1)



Tags:
AutoML
Automating Algorithms
Professor Ryan Adams
Princeton University
Expensive Optimization Problems
gradient computation
unbiased gradient estimates
machine learning problems
Jacobian accumulation problem
Microsoft Research
ML