Fast and Memory Efficient Differentially Private-SGD via JL Projections

Video Link: https://www.youtube.com/watch?v=VT6ws2ie-YI
Duration: 53:47


A Google TechTalk, presented by Sivakanth Gopi, 2021/05/21
ABSTRACT: Differential Privacy for ML Series. Differentially Private-SGD (DP-SGD) of Abadi et al. (2016) and its variations are the only known algorithms for private training of large-scale neural networks. This algorithm requires computing per-sample gradient norms, which is extremely slow and memory-intensive in practice. In this paper, we present a new framework for designing differentially private optimizers, called DP-SGD-JL and DP-Adam-JL. Our approach uses Johnson-Lindenstrauss (JL) projections to quickly approximate the per-sample gradient norms without computing them exactly, bringing the training time and memory requirements of our optimizers closer to those of their non-DP counterparts.

Unlike previous attempts to make DP-SGD faster, which work only on a subset of network architectures, we propose an algorithmic solution that works for any network in a black-box manner; this is the main contribution of the paper.
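To make the JL idea concrete, here is a minimal sketch of the norm-estimation trick as described in the abstract, not the paper's exact algorithm (its privacy accounting handles the estimation error, and details differ). The toy model, the helper names, and the use of torch.func are assumptions for illustration. The key fact is that for a Gaussian direction u in parameter space, a single forward-mode pass yields the inner product of u with every per-sample gradient at once, so averaging k squared projections estimates each squared norm without materializing per-sample gradients.

# Minimal sketch of JL-based per-sample norm estimation (assumes PyTorch >= 2.0).
import torch
from torch.func import functional_call, jvp

model = torch.nn.Linear(10, 1)                      # toy model for illustration
params = {n: p.detach() for n, p in model.named_parameters()}

def per_sample_losses(p, x, y):
    # One loss value per example, shape (batch,).
    preds = functional_call(model, p, (x,)).squeeze(-1)
    return (preds - y) ** 2

x, y = torch.randn(32, 10), torch.randn(32)

k, C, sigma = 8, 1.0, 1.0                           # JL projections, clip norm, noise multiplier
sq_norms = torch.zeros(32)
for _ in range(k):
    # Random Gaussian direction u in parameter space.
    u = {n: torch.randn_like(p) for n, p in params.items()}
    # One forward-mode pass gives <g_i, u> for every sample i simultaneously.
    _, dots = jvp(lambda p: per_sample_losses(p, x, y), (params,), (u,))
    sq_norms += dots ** 2
norm_est = (sq_norms / k).sqrt()                    # E[<g,u>^2] = ||g||^2 for Gaussian u

# Clip using the estimated norms; one ordinary backward pass then yields
# the weighted sum of per-sample gradients, to which Gaussian noise is added.
weights = torch.clamp(C / (norm_est + 1e-6), max=1.0)
for p in params.values():
    p.requires_grad_(True)
loss = (weights * per_sample_losses(params, x, y)).sum()
grads = torch.autograd.grad(loss, list(params.values()))
noisy_grads = [g + sigma * C * torch.randn_like(g) for g in grads]

The point of the sketch is the cost profile: k cheap forward-mode passes plus one standard backward pass, instead of materializing a full per-sample gradient matrix as vanilla DP-SGD implementations do.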

About the speaker: Sivakanth Gopi is a senior researcher in the Algorithms group at Microsoft Research Redmond. He is broadly interested in Theoretical Computer Science, with a special focus on Coding Theory and Differential Privacy. He is part of the DNA Storage Project, Project Laplace (differential privacy), and Coding for Distributed Storage at Microsoft. He received his PhD from Princeton University in 2018. Before that, he graduated from IIT Bombay with a major in computer science and a minor in mathematics.

Other Videos By Google TechTalks

2021-10-06  Greybeard Qualification (Linux Internals) part 3: Memory Management
2021-10-06  Greybeard Qualification (Linux Internals) part 2: Execution, Scheduling, Processes & Threads
2021-10-06  Greybeard Qualification (Linux Internals) part 6: Networking & Building a Kernel
2021-10-06  Greybeard Qualification (Linux Internals) part 5: Block Devices & File Systems
2021-10-06  Greybeard Qualification (Linux Internals) part 4: Startup and Init
2021-09-30  A Regret Analysis of Bilateral Trade
2021-09-29  CoinPress: Practical Private Mean and Covariance Estimation
2021-09-29  "I need a better description": An Investigation Into User Expectations For Differential Privacy
2021-09-29  On the Convergence of Deep Learning with Differential Privacy
2021-09-29  A Geometric View on Private Gradient-Based Optimization
2021-09-29  Fast and Memory Efficient Differentially Private-SGD via JL Projections
2021-07-13  Efficient Exploration in Bayesian Optimization – Optimism and Beyond by Andreas Krause
2021-07-13  Learning to Explore in Molecule Space by Yoshua Bengio
2021-07-13  Resource Allocation in Multi-armed Bandits by Kirthevasan Kandasamy
2021-07-13  Grey-box Bayesian Optimization by Peter Frazier
2021-06-10  Is There a Mathematical Model of the Mind? (Panel Discussion)
2021-06-04  Dataset Poisoning on the Industrial Scale
2021-06-04  Towards Training Provably Private Models via Federated Learning in Practice
2021-06-04  Breaking the Communication-Privacy-Accuracy Trilemma
2021-06-04  Cronus: Robust Knowledge Transfer for Federated Learning
2021-06-04  Private Algorithms with Minimal Space