Cronus: Robust Knowledge Transfer for Federated Learning

Video Link: https://www.youtube.com/watch?v=GJyXf2xtTo4



Duration: 20:56


A Google TechTalk, 2020/7/29, presented by Reza Shokri, National University of Singapore
ABSTRACT: Federated learning is vulnerable to many known privacy and security attacks. Shared parameters leak a significant amount of information about the participants' private datasets, as they continuously expose the entire internal state of local models to attackers. Federated learning is also severely vulnerable to poisoning attacks, in which some participants can adversarially influence the aggregate parameters. Overwriting local models with the global model, to initialize them before each round of local training, increases the influence of adversarial participants on others' models. Attackers exploit the weakness of parameter aggregation methods, which cannot provide tight error guarantees for high-dimensional parameters. Knowledge transfer via parameter sharing also restricts the network to homogeneous model architectures and limits model personalization.

In this talk, we present Cronus to address these issues. The simple yet effective idea behind the design of Cronus is a robust knowledge transfer algorithm. Local models share their predictions on a public dataset with a server that aggregates the predictions. Each local model is then trained on its private data as well as on the public data labeled with the aggregated predictions. This knowledge transfer through black-box predictions reduces information leakage about private data, enables aggregation of knowledge across models with different architectures, enables further personalization of models on local data, and, most importantly, enables robust aggregation with a significantly tighter error bound (owing to the low dimensionality of the model outputs).




Other Videos By Google TechTalks


2021-09-29  A Geometric View on Private Gradient-Based Optimization
2021-09-29  BB84: Quantum Protected Cryptography
2021-09-29  Fast and Memory Efficient Differentially Private-SGD via JL Projections
2021-09-29  Leveraging Public Data for Practical Synthetic Data Generation
2021-07-13  Efficient Exploration in Bayesian Optimization – Optimism and Beyond by Andreas Krause
2021-07-13  Learning to Explore in Molecule Space by Yoshua Bengio
2021-07-13  Resource Allocation in Multi-armed Bandits by Kirthevasan Kandasamy
2021-07-13  Grey-box Bayesian Optimization by Peter Frazier
2021-06-10  Is There a Mathematical Model of the Mind? (Panel Discussion)
2021-06-04  Dataset Poisoning on the Industrial Scale
2021-06-04  Cronus: Robust Knowledge Transfer for Federated Learning
2021-06-04  Adaptive Federated Optimization
2021-06-04  Orchard: Differentially Private Analytics at Scale
2021-06-04  Byzantine-Resilient High-Dimensional SGD with Local Iterations on Heterogeneous Data
2021-06-04  Workshop on Federated Learning and Analytics: Pre-recorded Talks Day 2 Track 2 Q&A Privacy/Security
2021-06-04  Google Workshop on Federated Learning and Analytics: Talks from Google
2021-06-04  Generative Models for Effective ML on Private, Decentralized Datasets
2021-06-04  TensorFlow Federated Tutorial Session
2021-06-04  Learning on Large-Scale Data with Security & Privacy
2021-06-04  Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
2021-06-04  FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning