Distributed Estimation with Multiple Samples per User: Sharp Rates and Phase Transition

Subscribers: 348,000
Published: 2022-02-08
Video Link: https://www.youtube.com/watch?v=F8OgSU0ijg4
Duration: 9:11
Views: 125

A Google TechTalk, presented by Ziteng Sun, Cornell University, at the 2021 Google Federated Learning and Analytics Workshop, Nov. 8-10, 2021.

For more information about the workshop: https://events.withgoogle.com/2021-workshop-on-federated-learning-and-analytics/#content

Other Videos By Google TechTalks

2022-02-08  Statistical Heterogeneity in Federated Learning
2022-02-08  Improved Information Theoretic Generalization Bounds for Distributed and Federated Learning
2022-02-08  Tight Accounting in the Shuffle Model of Differential Privacy
2022-02-08  Distributed Point Functions: Efficient Secure Aggregation and Beyond with Non-Colluding Servers
2022-02-08  How to Turn Privacy ON and OFF and ON Again
2022-02-08  Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
2022-02-08  Secure Federated Learning on Wimpy Devices
2022-02-08  Gaps between FL optimization theory and practice
2022-02-08  Mistify: Automating DNN Model Porting for On-Device Inference at the Edge
2022-02-08  Personalized Graph-Aided Online Federated Model Selection
2022-02-08  Distributed Estimation with Multiple Samples per User: Sharp Rates and Phase Transition
2022-02-08  Distributed neural network training via independent subnets
2022-02-08  CaPC Learning: Confidential and Private Collaborative Learning
2022-02-08  Locally Differentially Private Bayesian Inference
2022-02-08  Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
2022-02-08  SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
2022-02-08  Federated Multi-Task Learning under a Mixture of Distributions
2022-02-08  Private Multi-Group Aggregation
2022-02-08  Private Goodness-of-Fit: A Few Ideas Go a Long Way
2022-02-08  Privacy Amplification by Decentralization
2022-02-08  Experimenting w/ Local & Central Differential Privacy for Both Robustness & Privacy in Fed.Learning