Research Talk: Enhancing the robustness of massive language models via invariant risk minimization

Video Link: https://www.youtube.com/watch?v=FfinhAXLP1o

Speaker: Robert West, Tenure-Track Assistant Professor, EPFL

Despite the dramatic recent progress in natural language processing (NLP) afforded by large pretrained language models, important limitations remain. A growing body of work demonstrates that such models are easily fooled by adversarial attacks and generalize poorly out of distribution, as they tend to learn spurious, non-causal correlations. This talk explores how to reduce the impact of spurious correlations in large language models based on the so-called invariance principle, which states that only relationships invariant across training environments should be learned. It includes data showing that language models trained via invariant risk minimization (IRM), rather than traditional empirical risk minimization (ERM), achieve better out-of-distribution generalization.
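The IRM objective contrasted with ERM in the abstract can be sketched in a few lines. The sketch below is a minimal illustration of the practical IRMv1 relaxation (Arjovsky et al., 2019), not the speaker's implementation: each training environment contributes its own risk plus a penalty that measures how far the predictor is from being simultaneously optimal across environments. The function names (`risk`, `irm_penalty`, `irm_objective`) and the finite-difference gradient estimate are illustrative assumptions.

```python
import math

def risk(logits, labels):
    """Mean logistic loss of raw scores against binary {0, 1} labels."""
    eps = 1e-9
    total = 0.0
    for z, y in zip(logits, labels):
        p = 1.0 / (1.0 + math.exp(-z))
        total += -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(logits)

def irm_penalty(logits, labels, h=1e-4):
    """IRMv1 invariance penalty: the squared gradient of the environment
    risk with respect to a scalar 'dummy' classifier multiplier w,
    evaluated at w = 1.0 (estimated here by central finite differences).
    It is zero when the predictor is already optimal for this environment."""
    r_plus = risk([z * (1.0 + h) for z in logits], labels)
    r_minus = risk([z * (1.0 - h) for z in logits], labels)
    grad = (r_plus - r_minus) / (2.0 * h)
    return grad ** 2

def irm_objective(environments, lam=1.0):
    """IRM training loss: sum over environments of risk plus a weighted
    invariance penalty. With lam = 0 this reduces to pooled ERM."""
    return sum(risk(z, y) + lam * irm_penalty(z, y)
               for z, y in environments)
```

Minimizing this objective with a large penalty weight `lam` pushes the model toward features whose optimal readout is the same in every environment, which is the invariance principle the talk builds on.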

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-02-08  Opening remarks: Empowering software developers and mathematicians with next-generation AI
2022-02-08  Closing remarks: Health and Life Sciences - Delivery
2022-02-08  Tutorial: Translating real-world data into evidence
2022-02-08  Future of technology in combatting disease and disparities in treatment: A cardiovascular case study
2022-02-08  Research talks: Showcasing health equity, access, and resilience collaborations
2022-02-08  Opening remarks: Health & Life Sciences - Discovery
2022-02-08  The role of tech in decreasing health inequities, improving access, and strengthening resilience
2022-02-08  Opening remarks: Health and Life Sciences - Delivery
2022-02-08  Closing remarks: Causal Machine Learning
2022-02-08  Demo: Enabling end-to-end causal inference at scale
2022-02-08  Research Talk: Enhancing the robustness of massive language models via invariant risk minimization
2022-02-08  Research talk: Post-contextual-bandit inference
2022-02-08  Research talk: Causal ML and fairness
2022-02-08  Research talk: Causal learning: Discovering causal relations for out-of-distribution generalization
2022-02-08  Research talk: Can causal learning improve the privacy of ML models?
2022-02-08  Research talk: Causal ML and business
2022-02-08  Research talk: Challenges and opportunities in causal machine learning
2022-02-08  Opening remarks: Causal Machine Learning
2022-02-08  Closing remarks: The Future of Privacy and Security
2022-02-08  Demo: Generating formally proven low-level parsers with EverParse
2022-02-08  Demo: EverParse: Automatic generation of formally verified secure parsers for cloud integrity