Knowledge Distillation as Semiparametric Inference

Subscribers: 344,000
Video Link: https://www.youtube.com/watch?v=dEE3-g_8dWo
Duration: 50:40
Views: 1,696


More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Knowledge distillation alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. To explain and enhance this phenomenon, we cast knowledge distillation as a semiparametric inference problem with the optimal student model as the target, the unknown Bayes class probabilities as nuisance, and the teacher probabilities as a plug-in nuisance estimate. By adapting modern semiparametric tools, we derive new guarantees for the prediction error of standard distillation and develop two enhancements—cross-fitting and loss correction—to mitigate the impact of teacher overfitting and underfitting on student performance. We validate our findings empirically on both tabular and image data and observe consistent improvements from our knowledge distillation enhancements.
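To make the cross-fitting idea concrete, here is a minimal Python sketch of cross-fitted distillation for binary classification. It assumes a scikit-learn-style teacher (a random forest, chosen purely for illustration) and a simple logistic-regression student trained by gradient descent on a mixture of hard-label and soft-label cross-entropies; the fold count and mixing weight lam are free choices, and the talk's specific loss correction is not reproduced here. The key point of the sketch is that each training point receives soft targets from a teacher that never saw that point, which limits how much teacher overfitting can leak into the student.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

# Synthetic binary classification data (for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Cross-fitting: soft targets for each fold come from a teacher
# trained only on the remaining folds.
soft = np.empty(len(y))
for train_idx, hold_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    teacher = RandomForestClassifier(n_estimators=200, random_state=0)
    teacher.fit(X[train_idx], y[train_idx])
    soft[hold_idx] = teacher.predict_proba(X[hold_idx])[:, 1]

# Student: logistic regression fit by gradient descent on a convex
# combination of the soft-label and hard-label cross-entropies.
lam = 0.7                                   # weight on the teacher's soft targets
targets = lam * soft + (1.0 - lam) * y
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # student probabilities
    grad = p - targets                      # d(cross-entropy)/d(logit)
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

student_pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("student training accuracy:", (student_pred == y).mean())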

Lester is a statistical machine learning researcher at Microsoft Research New England and an adjunct professor at Stanford University. He received his Ph.D. in Computer Science (2012) and M.A. in Statistics (2011) from UC Berkeley and his B.S.E. in Computer Science (2007) from Princeton University. Before joining Microsoft, Lester spent three wonderful years as an assistant professor of Statistics and, by courtesy, Computer Science at Stanford and one year as a Simons Math+X postdoctoral fellow, working with Emmanuel Candes. Lester’s Ph.D. advisor was Mike Jordan, and his undergraduate research advisors were Maria Klawe and David Walker. He got his first taste of research at the Research Science Institute and learned to think deeply of simple things at the Ross Program. Lester’s current research interests include statistical machine learning, scalable algorithms, high-dimensional statistics, approximate inference, and probability. Lately, he has been developing and analyzing scalable learning algorithms for healthcare, climate forecasting, approximate posterior inference, high-energy physics, recommender systems, and the social good.

Learn more about the 2020-2021 Directions in ML: AutoML and Automating Algorithms virtual speaker series: https://www.microsoft.com/en-us/research/event/directions-in-ml/




Other Videos By Microsoft Research


2021-05-25  Introducing Developer Velocity Lab to improve developers’ work and well-being
2021-05-24  Machine Learning and Fairness
2021-05-24  Post-quantum cryptography: Supersingular isogenies for beginners
2021-05-24  Quantum-safe cryptography: Securing today’s data against tomorrow’s computers
2021-05-20  Failures of imagination: Discovering and measuring harms in language technologies
2021-05-13  Cities Unlocked – Introducing 3D Sound for Greater Mobility and Independence
2021-05-13  The Journey to Microsoft Soundscape
2021-05-13  Microsoft Soundscape - Lighting up the World with Sound
2021-05-12  Platform for Situated Intelligence Workshop | Day 1
2021-05-12  Platform for Situated Intelligence Workshop | Day 2
2021-05-03  Knowledge Distillation as Semiparametric Inference
2021-05-03  Better design, implementation, and testing of async systems with Coyote
2021-05-03  Research @Microsoft Research India: interdisciplinary and impactful with Dr. Sriram Rajamani
2021-04-29  Virtual Lake Nona Impact Forum “Health Innovation in the New Reality”
2021-04-28  Sound Capture and Speech Enhancement for Communication and Distant Speech Recognition
2021-04-27  Virtual Lake Nona Impact Forum “Health Innovation in the New Reality”
2021-04-26  FastNeRF: High-Fidelity Neural Rendering at 200FPS [Condensed]
2021-04-21  Research for Industries (RFI) Lecture Series: Warren Powell
2021-04-21  Research for Industries (RFI) Lecture Series: Andreas Haeberlen
2021-04-13  Discovering hidden connections in art with deep, interpretable visual analogies
2021-04-13  ZeRO & Fastest BERT: Increasing the scale and speed of deep learning training in DeepSpeed