MDETR: Modulated Detection for End-to-End Multi-Modal Understanding

Video Link: https://www.youtube.com/watch?v=QM07aZaSFak
Duration: 1:13:27


Multi-modal reasoning systems rely on a pre-trained object detector to extract regions of interest from the image. However, this crucial module is typically used as a black box, trained independently of the downstream task and on a fixed vocabulary of objects and attributes. This makes it challenging for such systems to capture the long tail of visual concepts expressed in free form text. In this paper we propose MDETR, an end-to-end modulated detector that detects objects in an image conditioned on a raw text query, like a caption or a question. We use a transformer-based architecture to reason jointly over text and image by fusing the two modalities at an early stage of the model. We pre-train the network on 1.3M text-image pairs, mined from pre-existing multi-modal datasets having explicit alignment between phrases in text and objects in the image. We then fine-tune on several downstream tasks such as phrase grounding, referring expression comprehension and segmentation, achieving state-of-the-art results on popular benchmarks. We also investigate the utility of our model as an object detector on a given label set when fine-tuned in a few-shot setting. We show that our pre-training approach provides a way to handle the long tail of object categories which have very few labelled instances. Our approach can be easily extended for visual question answering, achieving competitive performance on GQA and CLEVR.
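The early-fusion design described in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' implementation: the module names, feature dimensions, and choice of backbone outputs are assumptions, and positional encodings, the soft-token prediction head, and the contrastive alignment losses from the paper are omitted for brevity.

```python
# Minimal sketch of MDETR-style early fusion (illustrative, not the authors' code).
# Image features from a CNN backbone and token features from a pre-trained
# language model are projected into a shared space, concatenated into a single
# sequence, and encoded jointly; a DETR-style decoder then turns learned object
# queries into box predictions conditioned on both modalities.

import torch
import torch.nn as nn


class ModulatedDetectorSketch(nn.Module):
    def __init__(self, d_model=256, num_queries=100, text_dim=768):
        super().__init__()
        self.img_proj = nn.Conv2d(2048, d_model, kernel_size=1)  # e.g. ResNet stage-5 maps
        self.txt_proj = nn.Linear(text_dim, d_model)             # e.g. RoBERTa token states
        self.transformer = nn.Transformer(
            d_model=d_model,
            num_encoder_layers=6,
            num_decoder_layers=6,
            batch_first=True,
        )
        self.queries = nn.Embedding(num_queries, d_model)  # DETR-style object queries
        self.box_head = nn.Linear(d_model, 4)              # (cx, cy, w, h) per query

    def forward(self, img_feats, txt_feats):
        # img_feats: (B, 2048, H, W); txt_feats: (B, T, text_dim)
        b = img_feats.size(0)
        img_seq = self.img_proj(img_feats).flatten(2).transpose(1, 2)  # (B, H*W, d)
        txt_seq = self.txt_proj(txt_feats)                             # (B, T, d)
        fused = torch.cat([img_seq, txt_seq], dim=1)  # early fusion: one joint sequence
        tgt = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        hs = self.transformer(fused, tgt)             # (B, num_queries, d)
        return self.box_head(hs).sigmoid()            # boxes normalized to [0, 1]


if __name__ == "__main__":
    model = ModulatedDetectorSketch()
    img = torch.randn(2, 2048, 25, 25)  # stand-in backbone features
    txt = torch.randn(2, 16, 768)       # stand-in text-encoder outputs
    print(model(img, txt).shape)        # torch.Size([2, 100, 4])
```

Because the fusion happens before any detection decisions are made, the object queries can attend to the text tokens directly, which is what lets the detector condition on free-form phrases rather than a fixed label vocabulary.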

Speaker: Aishwarya Kamath, New York University's Center for Data Science

Microsoft Research Deep Learning team: https://www.microsoft.com/en-us/research/group/deep-learning-group/




Other Videos By Microsoft Research


2021-10-07 Convergence between CV and NLP Modeling and Learning
2021-10-05 Safe Real-World Autonomy in Uncertain and Unstructured Environments
2021-10-05 Women of Color and the Digital Labor of Repair
2021-10-01 Fake It Till You Make It: Face Analysis In The Wild Using Synthetic Data Alone
2021-09-23 ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
2021-09-23 Zero-Shot Detection via Vision and Language Knowledge Distillation
2021-09-17 Three Explorations on Pre-Training: an Analysis, an Approach, and an Architecture
2021-09-16 Visual Recognition beyond Appearances, and its Robotic Applications
2021-09-16 A Truly Unbiased Model
2021-09-16 Visual question answering & reasoning over vision & language: Beyond limits of statistical learning?
2021-09-15 MDETR: Modulated Detection for End-to-End Multi-Modal Understanding
2021-09-15 Learning Commonsense Understanding through Language and Vision
2021-09-15 Tightly Connecting Vision and Language
2021-09-15 Learning from Unlabeled Videos for Recognition, Prediction, and Control
2021-09-15 Grounded Visual Generation
2021-08-25 The New Jim Code: Reimagining the Default Settings of Technology & Society
2021-08-19 A mechatronic shape display based on auxetic materials
2021-08-16 Dependable IoT - Making data from IoT devices dependable and trustworthy for good decision making
2021-08-11 Lookout System: National Television Commercial (1998)
2021-08-06 Create human-centered AI with the Human-AI eXperience (HAX) Toolkit webinar
2021-08-04 Computing Technology as Racial Infrastructure: A History of the Present & Blueprint for Black Future