Grey-box Bayesian Optimization by Peter Frazier

Video Link: https://www.youtube.com/watch?v=6Q0HyWeNEoU
Duration: 1:17:30

A Google TechTalk, presented by Peter I. Frazier, 2021/06/08
ABSTRACT: Bayesian optimization is a powerful tool for optimizing non-convex, derivative-free objective functions that are time-consuming to evaluate. While BayesOpt has historically been deployed as a black-box optimizer, recent advances show considerable gains from "peeking inside the box". For example, when tuning hyperparameters in deep neural networks to minimize validation error, state-of-the-art BayesOpt tuning methods leverage the ability to stop training early, restart previously paused training, perform training and testing on a strict subset of the available data, and warm-start from previously tuned network architectures. We describe new "grey-box" Bayesian optimization methods that selectively exploit problem structure to deliver state-of-the-art performance. We then briefly describe applications of these methods to tuning deep neural networks, to inverse reinforcement learning, and to calibrating physics-based simulators to observational data.
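For readers unfamiliar with the black-box loop that the abstract takes as its starting point, here is a minimal numpy-only sketch: a Gaussian-process surrogate fit to past evaluations, with an expected-improvement acquisition function choosing the next point. The kernel, length scale, and toy 1-D objective are illustrative assumptions, not from the talk; the grey-box methods described above extend this loop by also admitting cheap structured evaluations (early-stopped training runs, data subsets, warm starts).

```python
import numpy as np
from math import erf

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential kernel between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # GP regression posterior mean and standard deviation at x_query.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    sol = np.linalg.solve(K, np.column_stack([y_train, K_s]))
    mu = K_s.T @ sol[:, 0]
    # Prior variance is 1 (unit-amplitude kernel); subtract the explained part.
    var = 1.0 - np.einsum('ij,ij->j', K_s, sol[:, 1:])
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI for minimization: E[max(best - f, 0)] under the GP posterior.
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return sigma * (z * cdf + pdf)

def bayes_opt(f, n_init=4, n_iter=10, seed=0):
    # Black-box BayesOpt loop: fit surrogate, maximize EI on a grid, evaluate.
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_init)
    y = np.array([f(xi) for xi in x])
    grid = np.linspace(0.0, 1.0, 201)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        ei = expected_improvement(mu, sigma, y.min())
        x_next = grid[np.argmax(ei)]
        x = np.append(x, x_next)
        y = np.append(y, f(x_next))
    return x[np.argmin(y)], y.min()

# Toy non-convex objective on [0, 1]; its minimum is near x ≈ 0.59.
f = lambda x: np.sin(8.0 * x) + 0.3 * x
x_best, y_best = bayes_opt(f)
```

A grey-box variant would replace the single call `f(x_next)` with a choice among fidelities (e.g., fewer training epochs or a data subset) and model the correlation between fidelities in the surrogate.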

About the speaker: Peter Frazier is the Eleanor and Howard Morgan Professor of Operations Research and Information Engineering at Cornell University. He is also a Staff Data Scientist at Uber. He leads Cornell's COVID-19 Mathematical Modeling Team, which designed Cornell's testing strategy to support safe in-person education during the pandemic. His academic research during more ordinary times is in Bayesian optimization, incentive design for social learning and multi-armed bandits. At Uber, he managed UberPool's data science group and currently helps to design Uber's pricing systems.




Other Videos By Google TechTalks

2021-09-29 CoinPress: Practical Private Mean and Covariance Estimation
2021-09-29 "I need a better description": An Investigation Into User Expectations For Differential Privacy
2021-09-29 On the Convergence of Deep Learning with Differential Privacy
2021-09-29 A Geometric View on Private Gradient-Based Optimization
2021-09-29 BB84: Quantum Protected Cryptography
2021-09-29 Fast and Memory Efficient Differentially Private-SGD via JL Projections
2021-09-29 Leveraging Public Data for Practical Synthetic Data Generation
2021-07-13 Efficient Exploration in Bayesian Optimization – Optimism and Beyond by Andreas Krause
2021-07-13 Learning to Explore in Molecule Space by Yoshua Bengio
2021-07-13 Resource Allocation in Multi-armed Bandits by Kirthevasan Kandasamy
2021-07-13 Grey-box Bayesian Optimization by Peter Frazier
2021-06-10 Is There a Mathematical Model of the Mind? (Panel Discussion)
2021-06-04 Dataset Poisoning on the Industrial Scale
2021-06-04 Towards Training Provably Private Models via Federated Learning in Practice
2021-06-04 Breaking the Communication-Privacy-Accuracy Trilemma
2021-06-04 Cronus: Robust Knowledge Transfer for Federated Learning
2021-06-04 Private Algorithms with Minimal Space
2021-06-04 Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
2021-06-04 Flower: A Friendly Federated Learning Framework
2021-06-04 Lions, Skunks, and Kangaroos: Geo-Distributed Learning on the Flickr-Mammal Dataset
2021-06-04 Secure Federated Learning in Adversarial Environments