Learning Commonsense Understanding through Language and Vision

Subscribers: 344,000
Video Link: https://www.youtube.com/watch?v=QYoG-usbD_0
Duration: 1:00:42
Views: 1,146
Likes: 22


As humans, we parse language and visual scenes -- often together -- into a rich understanding of what is going on in the world. Given even a still image or a sentence describing an event, like "my friends eating at a restaurant", we can infer who's doing what, where they are, and what might happen next. Though existing models seem strong at tasks involving language or vision alone, they often struggle to combine the two modalities into such a unified commonsense understanding.

In this talk, I will cover two recent works that seek to help bridge this gap. First, I'll introduce a model named PIGLeT that learns physical commonsense understanding by interacting with the world in simulation, and uses this knowledge to ground language. PIGLeT learns linguistic form and meaning together, and outperforms text-to-text-only models that are orders of magnitude larger. I'll then introduce a model named MERLOT, which learns multimodal script knowledge by watching millions of YouTube videos with transcribed speech. MERLOT learns the layered inferences that go beyond recognition of individual scenes, toward cognition-level reasoning about what is happening globally over time; it achieves state-of-the-art performance on 12 vision-and-language datasets.

I'll conclude with a sketch of future directions for how we can better learn multimodal commonsense understanding, as well as a discussion of the social impacts of this work.

Speaker: Rowan Zellers, University of Washington

MSR Deep Learning team: https://www.microsoft.com/en-us/research/group/deep-learning-group/




Other Videos By Microsoft Research


2021-10-05  Safe Real-World Autonomy in Uncertain and Unstructured Environments
2021-10-05  Women of Color and the Digital Labor of Repair
2021-10-01  Fake It Till You Make It: Face Analysis In The Wild Using Synthetic Data Alone
2021-09-23  ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
2021-09-23  Zero-Shot Detection via Vision and Language Knowledge Distillation
2021-09-17  Three Explorations on Pre-Training: an Analysis, an Approach, and an Architecture
2021-09-16  Visual Recognition beyond Appearances, and its Robotic Applications
2021-09-16  A Truly Unbiased Model
2021-09-16  Visual question answering & reasoning over vision & language: Beyond limits of statistical learning?
2021-09-15  MDETR: Modulated Detection for End-to-End Multi-Modal Understanding
2021-09-15  Learning Commonsense Understanding through Language and Vision
2021-09-15  Tightly Connecting Vision and Language
2021-09-15  Learning from Unlabeled Videos for Recognition, Prediction, and Control
2021-09-15  Grounded Visual Generation
2021-08-25  The New Jim Code: Reimagining the Default Settings of Technology & Society
2021-08-19  A mechatronic shape display based on auxetic materials
2021-08-16  Dependable IoT- Making data from IoT devices dependable and trustworthy for good decision making
2021-08-11  Lookout System: National Television Commercial (1998)
2021-08-06  Create human-centered AI with the Human-AI eXperience (HAX) Toolkit webinar
2021-08-04  Computing Technology as Racial Infrastructure: A History of the Present & Blueprint for Black Future
2021-07-27  Urban Air Chicago