Research talk: Differentially private fine-tuning of large language models

Subscribers: 343,000
Video link: https://www.youtube.com/watch?v=Qe8WsNiL6RA
Duration: 36:05
Views: 457
Likes: 6


We have come a long way in terms of protecting privacy when training ML models, particularly with large language models. We recently demonstrated that using differentially private stochastic gradient descent (DP-SGD) to fine-tune very large language models, such as GPT-3, is not only feasible but shows very promising results with respect to the privacy-utility tradeoff. In this talk, we highlight the challenges we have overcome over the past year and the opportunities our research enables for a range of product applications.
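To make the mechanism concrete, the two core steps of DP-SGD mentioned above are per-example gradient clipping and Gaussian noise addition before the parameter update. The following is a minimal illustrative sketch on a toy 1-D linear model, not the implementation from the talk; the hyperparameter values and model are assumptions chosen purely for demonstration.

```python
# Minimal DP-SGD sketch (illustration only, not the talk's system).
# Fits w in y ~= w*x with per-example gradient clipping and
# Gaussian noise -- the two defining steps of DP-SGD.
import random

random.seed(0)

# Toy data: y = 2x plus a little observation noise
data = [(x, 2.0 * x + random.gauss(0, 0.1))
        for x in (i / 10 for i in range(1, 21))]

clip_norm = 1.0    # C: per-example gradient norm bound (assumed value)
noise_mult = 1.0   # sigma: noise multiplier; larger => stronger privacy
lr = 0.1           # learning rate (assumed value)
w = 0.0

for step in range(200):
    batch = random.sample(data, 4)
    summed = 0.0
    for x, y in batch:
        g = 2 * (w * x - y) * x                # per-example gradient of squared error
        g = g / max(1.0, abs(g) / clip_norm)   # clip each gradient to norm <= C
        summed += g
    # Add Gaussian noise calibrated to the clipping bound, then average
    noisy = (summed + random.gauss(0, noise_mult * clip_norm)) / len(batch)
    w -= lr * noisy

print(w)  # should land near the true slope 2.0, perturbed by the added noise
```

In practice, fine-tuning a large language model this way uses the same recipe at scale (per-example clipping over millions of parameters, noise calibrated to the clipping bound), with a privacy accountant tracking the cumulative (epsilon, delta) budget across steps.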

#MSFTResearchSummit

See related sessions in this track: https://www.microsoft.com/en-us/research/video/research-talk-differentially-private-fine-tuning-of-large-language-models/

Learn more about the 2022 Microsoft Research Summit: https://www.microsoft.com/en-us/research/event/microsoft-research-summit-2022/




Other Videos By Microsoft Research


2022-10-27 Lightning talks: AI in healthcare
2022-10-27 Lightning talks: AI in life sciences
2022-10-27 Plenary: Emerging foundations for planet-scale computing
2022-10-27 Research talk: Correct computational law and civil procedure with the Lean Proof Assistant
2022-10-27 Panel discussion: From science to policy: Decision-making under uncertainty for global emergencies
2022-10-27 Panel discussion: Towards climate-smart cities: IoT networks for air pollution sensing
2022-10-27 Lightning talks: Sustainably nourishing the world
2022-10-27 Panel discussion and research talk: Computational advances in climate risk assessment
2022-10-27 Keynote with guests: Accelerating precision health through scientific innovation and discovery
2022-10-27 Panel discussion: Ambient clinical intelligence: The next frontier of AI
2022-10-27 Research talk: Differentially private fine-tuning of large language models
2022-10-27 Lightning talks: Responsible AI: The challenge of big models
2022-10-27 Research talk: Cloud Intelligence/AIOps – Infusing AI into cloud computing
2022-10-27 Panel discussion: Emerging computing technologies in academia and industry
2022-10-27 Keynote with guests: Toward AI that empowers more people more of the time
2022-10-27 Lightning talks: Training and inference efficiency
2022-10-27 Lightning talks: Skills acquisition and new capabilities
2022-10-27 Lightning talks: Aligning models with human intent
2022-10-27 Fireside chat: What’s next in large-scale AI
2022-10-27 Plenary: Emerging foundations for planet-scale computing [ASL version]
2022-10-27 Panel discussion: From AI research to application in AAA-games



Tags:
Algorithms
Artificial intelligence
Building Trust
Huishuai Zhang
Melissa Chase
Research talk: Differentially private fine-tuning of large language models | T401
microsoft research summit 2022
ms research summit
msft research summit 2022
msft summit
msft summit 2022
research summit
summit 2022