Panel: Large-scale neural platform models: Opportunities, concerns, and directions

Subscribers: 344,000
Video link: https://www.youtube.com/watch?v=a7G0no5KjfU
Duration: 46:27
Views: 1,004


Speakers:
Eric Horvitz, Chief Scientific Officer, Microsoft
Miles Brundage, Head of Policy Research, OpenAI
Yejin Choi, Professor, University of Washington / AI2
Percy Liang, Associate Professor, Stanford University

Large-scale, pretrained neural models are driving significant research and development across multiple AI areas. They have played a major role in research efforts and have been at the root of leaps forward in capabilities in natural language processing, computer vision, and multimodal reasoning. Over the last five years, large-scale neural models have evolved into platforms, where fixed large-scale “platform models” are adapted via fine-tuning to develop capabilities on specific tasks. Research continues, and we have much to learn. While there is excitement about demonstrated capabilities, the “models as platforms” paradigm is concurrently raising questions and framing discussions about a constellation of concerns. These include challenges with safety and responsibility regarding the understandability of emergent behaviors, the potential for systems to generate offensive output, and malevolent uses of new capabilities. Other discussion focuses on the cost of building platform models and the rise of haves and have-nots, where only a few industry organizations can construct platform models. Microsoft Chief Scientific Officer Eric Horvitz will lead an expert panel on neural platform models, discussing research directions, responsible practices, and ways forward on key concerns.

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-01-24Research talk: Domain-specific pretraining for vertical search
2022-01-24Research talk: Is phrase retrieval all we need?
2022-01-24Live Q&A and Closing remarks: New future of work
2022-01-24Research talk: DeepXML: A deep extreme classification framework for recommending millions of items
2022-01-24Talk series: Developer productivity
2022-01-24Fireside Conversation Series: Building an equitable environment for hybrid work
2022-01-24Practical tips for productivity & wellbeing: Focusing without getting exhausted
2022-01-24Research talk: Attentive knowledge-aware graph neural networks for recommendation
2022-01-24Practical tips for productivity & wellbeing: Lessons from COVID-19 around time management
2022-01-24Tutorial, Research talk, and Q&A: ElectionGuard: Enabling voters to verify election integrity
2022-01-24Panel: Large-scale neural platform models: Opportunities, concerns, and directions
2022-01-20Unsupervised Speech Enhancement
2022-01-20Developing a Brain-Computer Interface Based on Visual Imagery
2022-01-04Panel: Theory Research in Big Data Era
2022-01-04Talk: Sequential Search Problems Beyond The Pandora Box Setting
2022-01-04Recap video of 2021 MSR Asia Theory Workshop (Short version)
2022-01-04Talk: The implicit bias of optimization algorithms in deep learning
2022-01-04Talk: Coresets for Clustering with Missing Values
2022-01-04MSR Asia Theory Center Introduction
2022-01-04Inauguration Ceremony of MSR Asia Theory Center Opening Speech from Tie-Yan Liu
2022-01-04Talk: Batch Online Learning and Decision



Tags:
deep learning
large-scale models
large-scale AI models
AI
artificial intelligence
microsoft research summit