Zero-Shot Detection via Vision and Language Knowledge Distillation

Subscribers: 344,000
Published on 2021-09-23 ● Video Link: https://www.youtube.com/watch?v=aA0r1M_NWhs
Duration: 1:09:37
Views: 3,478


In this talk, I will introduce our recent work on ViLD, a training method via Vision and Language knowledge Distillation. We distill the knowledge from a pre-trained zero-shot image classification model (e.g., CLIP) into a two-stage detector (e.g., Mask R-CNN). Our method aligns the region embeddings in the detector to the text and image embeddings inferred by the pre-trained model. The text embeddings, obtained by feeding category names into the pre-trained text encoder, serve as the detection classifier. We then minimize the distance between the region embeddings and the image embeddings, obtained by feeding cropped region proposals into the pre-trained image encoder. During inference, we add the text embeddings of novel categories to the detection classifier to enable zero-shot detection. We benchmark the performance on the LVIS v1.0 dataset, holding out all rare categories as novel categories. ViLD obtains 16.1 mask APr with a Mask R-CNN (ResNet-50 FPN) for zero-shot detection, outperforming its supervised counterpart by 3.8 APr. The model can transfer directly to other datasets, achieving 72.2 AP50 on PASCAL VOC, 36.6 AP on COCO, and 11.8 AP on Objects365.
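
To make the two alignment objectives concrete, here is a minimal PyTorch sketch of the training losses as described in the abstract: a cross-entropy loss over cosine similarities between region embeddings and the frozen text embeddings (with a learned background embedding), plus an L1 distillation loss pulling region embeddings toward the CLIP image embeddings of the cropped proposals. The function name, the background embedding, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def vild_losses(region_embs,       # (N, D) region embeddings from the detector head
                text_embs,         # (C, D) frozen CLIP text embeddings of base category names
                labels,            # (N,) ground-truth class per region; 0 = background
                clip_image_embs,   # (N, D) CLIP image embeddings of the cropped proposals
                background_emb,    # (1, D) learned embedding for the background class (assumed)
                temperature=0.01): # illustrative value; treated as a hyperparameter
    # Normalize so the dot products below are cosine similarities.
    region = F.normalize(region_embs, dim=-1)
    classifier = F.normalize(torch.cat([background_emb, text_embs], dim=0), dim=-1)

    # ViLD-text: classify regions against the text embeddings (cosine / temperature).
    logits = region @ classifier.t() / temperature   # (N, C+1)
    loss_text = F.cross_entropy(logits, labels)

    # ViLD-image: L1 distillation toward the pre-trained image encoder's embeddings.
    loss_image = F.l1_loss(region_embs, clip_image_embs)

    return loss_text, loss_image
```

At inference, the same cosine classifier is reused with the text embeddings of novel categories concatenated into `text_embs`; no detector weights change, which is what enables the zero-shot transfer described above.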

Speaker: Yin Cui, Google

Microsoft Research Deep Learning team: https://www.microsoft.com/en-us/research/group/deep-learning-group/




Other Videos By Microsoft Research


2021-10-19 Working at Microsoft Research Cambridge
2021-10-14 Accelerating AI Innovation by Optimizing Infrastructure. With Dr. Muthian Sivathanu
2021-10-11 In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures
2021-10-10 HapticBots: Distributed Encountered-type Haptics for VR with Multiple Shape-changing Mobile Robots
2021-10-10 X-Rings: A Hand-mounted 360 Degree Shape Display for Grasping in Virtual Reality [UIST 2021]
2021-10-07 Convergence between CV and NLP Modeling and Learning
2021-10-05 Safe Real-World Autonomy in Uncertain and Unstructured Environments
2021-10-05 Women of Color and the Digital Labor of Repair
2021-10-01 Fake It Till You Make It: Face Analysis In The Wild Using Synthetic Data Alone
2021-09-23 ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
2021-09-23 Zero-Shot Detection via Vision and Language Knowledge Distillation
2021-09-17 Three Explorations on Pre-Training: an Analysis, an Approach, and an Architecture
2021-09-16 Visual Recognition beyond Appearances, and its Robotic Applications
2021-09-16 A Truly Unbiased Model
2021-09-16 Visual question answering & reasoning over vision & language: Beyond limits of statistical learning?
2021-09-15 MDETR: Modulated Detection for End-to-End Multi-Modal Understanding
2021-09-15 Learning Commonsense Understanding through Language and Vision
2021-09-15 Tightly Connecting Vision and Language
2021-09-15 Learning from Unlabeled Videos for Recognition, Prediction, and Control
2021-09-15 Grounded Visual Generation
2021-08-25 The New Jim Code: Reimagining the Default Settings of Technology & Society