Pathwise Conditioning and Non-Euclidean Gaussian Processes

Subscribers: 348,000
Video Link: https://www.youtube.com/watch?v=_5jiCtfzqdg
Duration: 1:06:57
Views: 1,475
Likes: 43
A Google TechTalk, presented by Alexander Terenin, 2022-11-28
BayesOpt Speaker Series. ABSTRACT: In Gaussian processes, conditioning and computation of posterior distributions are usually carried out distributionally, by working with finite-dimensional marginals. However, there is another way to think about conditioning: working with actual random functions rather than their probability distributions. This perspective is particularly helpful in decision-theoretic settings such as Bayesian optimization, where it enables efficient computation of a wider class of acquisition functions than would otherwise be possible. In this talk, we describe these recent advances and discuss their broader implications for Gaussian processes. We then present a class of Gaussian process models on graphs and manifolds, which make it possible to perform Bayesian optimization while taking symmetries and constraints into account in an intrinsic manner.
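To make the pathwise view concrete, below is a minimal NumPy sketch of Matheron's update rule, the identity underlying pathwise conditioning: a posterior sample is obtained by drawing a sample from the prior and correcting it with the observed residual, rather than by sampling from the posterior's finite-dimensional marginals. The kernel choice, function names, and parameters here are illustrative assumptions, not code from the talk.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b (illustrative choice)."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def sample_posterior_pathwise(x_train, y_train, x_test, noise=1e-2, rng=None):
    """Draw one posterior sample via Matheron's rule (pathwise conditioning):
    f_post(x*) = f_prior(x*) + K(x*, X) (K(X, X) + s^2 I)^{-1} (y - f_prior(X) - eps).
    """
    rng = np.random.default_rng(rng)
    # Joint prior sample over training and test locations.
    x_all = np.concatenate([x_train, x_test])
    K_all = rbf_kernel(x_all, x_all) + 1e-9 * np.eye(len(x_all))
    f_prior = rng.multivariate_normal(np.zeros(len(x_all)), K_all)
    f_prior_train, f_prior_test = f_prior[:len(x_train)], f_prior[len(x_train):]
    # Simulated observation noise, matching the Gaussian likelihood.
    eps = noise * rng.standard_normal(len(x_train))
    K_xx = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_sx = rbf_kernel(x_test, x_train)
    # Pathwise update: correct the prior sample using the observed residual.
    update = K_sx @ np.linalg.solve(K_xx, y_train - f_prior_train - eps)
    return f_prior_test + update

# Example: condition a prior sample on three noisy observations of sin(x).
x_train = np.array([-1.0, 0.0, 1.5])
y_train = np.sin(x_train)
x_test = np.linspace(-2.0, 2.0, 50)
posterior_sample = sample_posterior_pathwise(x_train, y_train, x_test, rng=0)
```

Because each posterior sample is an explicit function of a prior sample, acquisition functions that depend on whole sample paths (rather than only pointwise means and variances) can be evaluated directly, which is the efficiency gain the abstract refers to.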

Bio: Alexander Terenin is a Postdoctoral Research Associate at the University of Cambridge. He is interested in statistical machine learning, particularly in settings where the data is not fixed but is gathered interactively by the learning machine. This leads naturally to Gaussian processes and data-efficient interactive decision-making systems such as Bayesian optimization, to areas such as multi-armed bandits and reinforcement learning, and to techniques for incorporating inductive biases and prior information, such as symmetries, into machine learning models.




Other Videos By Google TechTalks


2023-03-07  Zürich Go Meetup: Zero-effort Type-safe Parsing of JSON and XML
2023-03-07  Zürich Go Meetup: Let’s Build a Game with Go
2023-03-07  Zürich Go Meetup: Run Go programs on your Raspberry Pi with gokrazy!
2023-03-03  Online Covering: Secretaries, Prophets and Universal Maps
2023-03-03  Auto-bidding in Online Advertising: Campaign Management and Fairness
2023-03-03  Tree Learning: Optimal Algorithms and Sample Complexity
2023-03-03  A Fast Algorithm for Adaptive Private Mean Estimation
2023-02-13  Piers Ridyard | CEO RDX Works | Radix Protocol | web3 talks | Dec 7th 2022 | MC: Blake DeBenon
2023-02-10  Sergey Gorbunov | Co-Founder Axelar | web3 talks | Jan 26th 2023 | MC: Marlon Ruiz
2023-02-10  Fast Neural Kernel Embeddings for General Activations
2023-02-10  Pathwise Conditioning and Non-Euclidean Gaussian Processes
2023-02-10  Privacy-Preserving Machine Learning with Fully Homomorphic Encryption
2023-01-19  Charles Hoskinson | CEO of Input Output Global | web3 talks | Jan 5th 2023 | MC: Marlon Ruiz
2023-01-18  Control, Confidentiality, and the Right to be Forgotten
2023-01-18  The Saddle Point Accountant for Differential Privacy
2023-01-18  Analog vs. Digital Epsilons: Implementation Considerations for Differential Privacy
2023-01-18  Secure Self-supervised Learning
2023-01-18  Example Memorization in Learning: Batch and Streaming
2023-01-18  Marginal-based Methods for Differentially Private Synthetic Data
2023-01-18  Private Convex Optimization via Exponential Mechanism
2023-01-18  Differentially Private Multi-party Data Release for Linear Regression