Towards Training Provably Private Models via Federated Learning in Practice

A Google TechTalk
Video Link: https://www.youtube.com/watch?v=2OhUkqUhufs
Duration: 13:27

A Google TechTalk, 2020/7/29, presented by Om Thakkar, Google
ABSTRACT: This talk is divided into two parts. In the first part, we see how several components of Federated Learning (FL) play an important role in reducing unintended memorization in trained models. Specifically, we observe that the clustering of data according to users---which happens by design in FL---has a significant effect in reducing such memorization, and that using the method of Federated Averaging for training causes a further reduction. We also show that training with a strong user-level differential privacy guarantee results in models that exhibit the least amount of unintended memorization. In the second part of the talk, we focus on providing provable privacy guarantees for running iterative methods like Differentially Private Stochastic Gradient Descent (DP-SGD) in the setting of FL. We describe our random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client. It has privacy/accuracy trade-offs similar to those of privacy amplification by subsampling/shuffling. However, our method does not require server-initiated communication, or even knowledge of the population size. To our knowledge, this is the first privacy amplification result tailored for a distributed learning framework, and it may have broader applicability beyond FL.

The first part of the talk is based on "Understanding Unintended Memorization in Federated Learning", which is joint work with Swaroop Ramaswamy, Rajiv Mathews, and Françoise Beaufays. The second part of the talk is based on "Privacy Amplification via Random Check-Ins", which is joint work with Borja Balle, Peter Kairouz, Brendan McMahan, and Abhradeep Thakurta.
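The random check-in idea described in the abstract can be sketched as a small simulation. This is an illustrative sketch under our own assumptions (the function name, parameters, and the tie-breaking rule for collisions are hypothetical, not taken from the paper): each client locally and independently flips a coin to decide whether to participate at all and, if so, picks a uniformly random time slot in the training window; the server then performs at most one DP-SGD-style update per slot using whichever client checked in.

```python
import random

def random_checkins(num_clients, num_slots, p_participate, seed=0):
    """Illustrative simulation of random check-ins (not the paper's code).

    Each client decides locally and independently whether to participate
    (with probability p_participate) and, if so, picks a uniformly random
    slot. No server-initiated communication is needed, and the server
    never needs to know the population size.
    """
    rng = random.Random(seed)
    slots = [[] for _ in range(num_slots)]
    for client in range(num_clients):
        if rng.random() < p_participate:      # local coin flip
            slot = rng.randrange(num_slots)   # local uniform slot choice
            slots[slot].append(client)
    # The server runs one update per slot. If several clients collide on
    # the same slot, one is chosen at random here; empty slots are simply
    # skipped in this sketch.
    schedule = []
    for slot, arrivals in enumerate(slots):
        if arrivals:
            schedule.append((slot, rng.choice(arrivals)))
    return schedule

# Example: 100 clients, a window of 20 slots, 50% participation.
schedule = random_checkins(num_clients=100, num_slots=20, p_participate=0.5)
```

The randomized, client-driven participation is what yields the subsampling/shuffling-style privacy amplification the talk refers to: from the server's view, which client (if any) contributed at a given step is itself random.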



