On the Convergence of Deep Learning with Differential Privacy

Video Link: https://www.youtube.com/watch?v=K91CvAz_iK0
Duration: 47:23

A Google TechTalk, presented by Zhiqi Bu, 2021/07/02
ABSTRACT: Differential Privacy for ML Series. In deep learning with differential privacy (DP), the neural network usually achieves privacy at the cost of slower convergence (and thus lower performance) than its non-private counterpart. This work gives the first convergence analysis of DP deep learning, through the lens of training dynamics and the neural tangent kernel (NTK). Our convergence theory successfully characterizes the effects of two key components in DP training: the per-sample clipping (flat or layerwise) and the noise addition. Our analysis not only initiates a general, principled framework for understanding DP deep learning with any network architecture and loss function, but also motivates a new clipping method, the global clipping, which significantly improves convergence while preserving the same privacy guarantee as the existing local clipping.
In terms of theoretical results, we establish the precise connection between per-sample clipping and the NTK matrix. We show that in the gradient flow regime, i.e., with an infinitesimal learning rate, the noise level of DP optimizers does not affect convergence. We prove that DP gradient descent (GD) with global clipping guarantees monotone convergence to zero loss, a property that can be violated by the existing DP-GD with local clipping. Notably, our analysis framework easily extends to other optimizers, e.g., DP-Adam. Empirically, DP optimizers equipped with global clipping perform strongly on a wide range of classification and regression tasks. In particular, global clipping is surprisingly effective at learning calibrated classifiers, in contrast to existing DP classifiers, which are often over-confident and unreliable. Implementation-wise, the new clipping can be realized by adding one line of code to the Opacus library.
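For readers unfamiliar with the clipping step the abstract refers to, the following is a minimal sketch of one DP gradient-descent step with standard per-sample ("local", flat) clipping in the style of Abadi et al. / Opacus, alongside a global-clipping variant written as one plausible reading of the abstract (keep a per-sample gradient untouched if its norm is under a threshold, otherwise drop it). The toy model, the names CLIP_NORM, GLOBAL_THRESHOLD, and NOISE_MULTIPLIER, and the exact keep-or-drop rule are illustrative assumptions, not the paper's implementation; the precise rule, its layerwise version, and the privacy accounting are in the paper (https://arxiv.org/abs/2106.07830).

```python
# Sketch of one DP-GD step: per-sample (local, flat) clipping vs. an assumed
# global-clipping variant. Illustrative only; see the paper for the exact rule.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)            # toy model (assumption)
loss_fn = nn.MSELoss()
X, y = torch.randn(8, 10), torch.randn(8, 1)

CLIP_NORM = 1.0         # R: per-sample clipping norm for local clipping (assumed value)
GLOBAL_THRESHOLD = 5.0  # Z: norm threshold for the global-clipping variant (assumed value)
NOISE_MULTIPLIER = 1.0  # sigma: Gaussian noise scale (assumed value)
LR = 0.1

def per_sample_grad(model, x_i, y_i):
    """Gradient of the loss on a single example, flattened into one vector."""
    model.zero_grad()
    loss_fn(model(x_i.unsqueeze(0)), y_i.unsqueeze(0)).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

grads = torch.stack([per_sample_grad(model, x_i, y_i) for x_i, y_i in zip(X, y)])
norms = grads.norm(dim=1)

# Local (flat) clipping: rescale every per-sample gradient to norm <= R.
local_clipped = grads * torch.clamp(CLIP_NORM / norms, max=1.0).unsqueeze(1)

# Global clipping (hedged reading of the abstract): keep a sample's gradient
# untouched if its norm is within the threshold, otherwise drop it, so surviving
# gradients are not individually rescaled.
keep = (norms <= GLOBAL_THRESHOLD).float().unsqueeze(1)
global_clipped = grads * keep

# Aggregate, add Gaussian noise calibrated to the clipping bound, and step.
# Swapping in global_clipped.sum(dim=0), with noise calibrated to the threshold
# instead, would give the global-clipping variant.
summed = local_clipped.sum(dim=0)
noisy = summed + NOISE_MULTIPLIER * CLIP_NORM * torch.randn_like(summed)
update = LR * noisy / len(X)

with torch.no_grad():
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p -= update[offset:offset + n].view_as(p)
        offset += n
```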

https://arxiv.org/abs/2106.07830




Other Videos By Google TechTalks


2021-10-12 Near-Optimal Experimental Design for Networks: Independent Block Randomization
2021-10-06 Greybeard Qualification (Linux Internals) part 1: Process Structure and IPC
2021-10-06 Greybeard Qualification (Linux Internals) part 3: Memory Management
2021-10-06 Greybeard Qualification (Linux Internals) part 2: Execution, Scheduling, Processes & Threads
2021-10-06 Greybeard Qualification (Linux Internals) part 6: Networking & Building a Kernel
2021-10-06 Greybeard Qualification (Linux Internals) part 5: Block Devices & File Systems
2021-10-06 Greybeard Qualification (Linux Internals) part 4: Startup and Init
2021-09-30 A Regret Analysis of Bilateral Trade
2021-09-29 CoinPress: Practical Private Mean and Covariance Estimation
2021-09-29 "I need a better description": An Investigation Into User Expectations For Differential Privacy
2021-09-29 On the Convergence of Deep Learning with Differential Privacy
2021-09-29 A Geometric View on Private Gradient-Based Optimization
2021-09-29 BB84: Quantum Protected Cryptography
2021-09-29 Fast and Memory Efficient Differentially Private-SGD via JL Projections
2021-09-29 Leveraging Public Data for Practical Synthetic Data Generation
2021-07-13 Efficient Exploration in Bayesian Optimization – Optimism and Beyond by Andreas Krause
2021-07-13 Learning to Explore in Molecule Space by Yoshua Bengio
2021-07-13 Resource Allocation in Multi-armed Bandits by Kirthevasan Kandasamy
2021-07-13 Grey-box Bayesian Optimization by Peter Frazier
2021-06-10 Is There a Mathematical Model of the Mind? (Panel Discussion)
2021-06-04 Dataset Poisoning on the Industrial Scale