High-Dimensional Prediction for Sequential Decision Making

Video link: https://www.youtube.com/watch?v=hs43OwiRtAg
Duration: 49:05
A Google TechTalk, presented by Georgy Noarov, 2023-11-16
Google Algorithms Seminar - ABSTRACT: In predictions-to-decisions pipelines, statistical forecasts are useful insofar as they are a trustworthy guide to downstream rational decision making.

Towards this, we study the problem of making online predictions of an adversarially chosen high-dimensional state that are "unbiased" subject to an arbitrary collection of conditioning events, with the goal of tailoring these events to downstream decision makers who will use these predictions to inform their actions. We give efficient general algorithms for solving this problem, and a number of applications that stem from choosing an appropriate set of conditioning events, including:
(1) tractable algorithms with strong no-regret guarantees over large action spaces, (2) a high-dimensional best-in-class (omniprediction) result, (3) fairness guarantees of various flavors, and (4) a novel framework for uncertainty quantification in multiclass settings.
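To make the core unbiasedness condition concrete, here is a minimal sketch (not the talk's algorithm, just the condition it enforces): for each conditioning event, the average prediction error restricted to the rounds where that event fired should tend to zero. The function name and the 1-D toy state are illustrative assumptions.

```python
import numpy as np

def conditional_bias(predictions, outcomes, events):
    """For each conditioning event (a boolean mask over rounds), compute
    the average prediction error on the rounds where the event fired.
    Predictions "unbiased subject to the events" drive each of these
    averages to zero as rounds accumulate."""
    biases = {}
    for name, mask in events.items():
        mask = np.asarray(mask, dtype=bool)
        if mask.sum() == 0:
            biases[name] = 0.0
        else:
            # mean of (prediction - realized state) restricted to the event
            biases[name] = float(np.mean(predictions[mask] - outcomes[mask]))
    return biases

# Toy example: scalar state over four rounds, two overlapping events.
preds = np.array([0.6, 0.4, 0.7, 0.5])
truth = np.array([1.0, 0.0, 1.0, 0.0])
events = {"all rounds": [1, 1, 1, 1], "odd rounds": [1, 0, 1, 0]}
print(conditional_bias(preds, truth, events))
```

In the high-dimensional setting of the talk, the state and predictions are vectors and the events can depend on context and on the downstream agents' actions, but the bias condition being driven to zero is of this conditional-average form.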

For example, we can efficiently produce predictions targeted at any polynomially many decision makers, offering each of them optimal swap regret if they simply best respond to our predictions. Generalizing this, in the online combinatorial optimization setting we obtain the first efficient algorithms that guarantee, for up to polynomially many decision makers, no regret on any polynomial number of subsequences that can depend on their actions as well as any external context. As an illustration, for the online routing problem this easily implies --- for the first time --- efficiently obtainable no-swap-regret guarantees over all exponentially many paths that make up an agent's action space (and this can be obtained for multiple agents at once); by contrast, prior no-swap-regret algorithms such as Blum-Mansour would be intractable in this setting, as they need to enumerate over the exponentially large action space. Moreover, our results imply novel regret guarantees for extensive-form games.
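The "simply best respond" step above asks very little of each decision maker: given the forecast, pick the action minimizing expected loss. A minimal sketch (the loss-matrix interface is an illustrative assumption, not the talk's API):

```python
import numpy as np

def best_respond(loss, p):
    """Return the action minimizing expected loss under the predicted
    state distribution p, where loss[a, s] is the loss of action a in
    state s. The point: the downstream agent needs nothing beyond the
    forecast itself -- no regret bookkeeping of its own."""
    expected = loss @ p          # expected loss of each action under p
    return int(np.argmin(expected))

# Toy example: two actions, two states.
loss = np.array([[0.0, 1.0],    # action 0 is good in state 0
                 [1.0, 0.0]])   # action 1 is good in state 1
print(best_respond(loss, np.array([0.8, 0.2])))  # favors action 0
print(best_respond(loss, np.array([0.3, 0.7])))  # favors action 1
```

The swap-regret guarantee in the abstract says that if the forecasts are unbiased on the right conditioning events, this naive best-response policy already achieves optimal swap regret, even over structured action spaces (such as the exponentially many routing paths) that Blum-Mansour-style algorithms cannot enumerate.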

Turning to uncertainty quantification in ML, we show how our methods let us estimate (in the online adversarial setting) multiclass/multilabel probability vectors in a transparent and trustworthy fashion: in particular, downstream prediction set algorithms (i.e., models that predict multiple labels at once rather than a single one) will be incentivized to simply use our estimated probabilities as if they were the true conditional class probabilities, and their predictions will be guaranteed to satisfy multigroup fairness and other "conditional coverage" guarantees. This gives a powerful new alternative to well-known set-valued prediction paradigms such as conformal and top-K prediction. Moreover, our predictions can be guaranteed to be "best-in-class" --- i.e. to beat any polynomial collection of other (e.g. NN-based) multiclass vector predictors, simultaneously as measured by all Lipschitz Bregman loss functions (including L2 loss, cross-entropy loss, etc.). This can be interpreted as a high-dimensional omniprediction result.
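As an illustration of the downstream side of this pipeline, here is one standard set-valued prediction rule (an assumption for illustration, not necessarily the rule used in the talk): include labels in decreasing order of estimated probability until the cumulative mass reaches the target coverage. The abstract's point is that such a rule can safely treat the estimated vector as if it were the true conditional distribution.

```python
import numpy as np

def prediction_set(probs, alpha=0.1):
    """Greedily add labels in decreasing estimated-probability order
    until their cumulative mass reaches 1 - alpha; return the chosen
    label indices. probs is an estimated multiclass probability vector."""
    order = np.argsort(probs)[::-1]        # labels, most likely first
    cum = np.cumsum(probs[order])          # running coverage
    k = int(np.searchsorted(cum, 1.0 - alpha)) + 1
    return sorted(order[:k].tolist())

# Toy example: four classes, target 90% coverage.
probs = np.array([0.5, 0.3, 0.15, 0.05])
print(prediction_set(probs, alpha=0.1))    # set covering >= 90% mass
```

Under the guarantees described above, sets built this way from the unbiased estimates inherit multigroup-conditional coverage, rather than only the marginal coverage of standard conformal methods.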

ABOUT THE SPEAKER: Georgy Noarov is a CS PhD student at the University of Pennsylvania, advised by Michael Kearns and Aaron Roth. He is broadly interested in theoretical CS and ML, with particular focus on fair/robust/trustworthy ML, online learning, algorithmic game theory, and uncertainty quantification. He obtained his B.A. in Mathematics from Princeton University, where his advisors were Mark Braverman and Matt Weinberg. He has received several academic awards, and his research has been supported by an Amazon Gift for Research in Trustworthy AI.



