ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision

Published on 2021-09-23 ● Video Link: https://www.youtube.com/watch?v=kejLJ0kbIGM



Duration: 57:52


Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and lead to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on the Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
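The dual-encoder contrastive objective described in the abstract is simple to sketch. The PyTorch snippet below is an illustrative reconstruction, not the speaker's actual code: the function name, the temperature value of 0.07, and the use of in-batch negatives are assumptions about the standard formulation of this kind of loss. It aligns normalized image and text embeddings by treating the matching pair in each batch as the positive and all other pairings as negatives.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    # image_emb, text_emb: [batch, dim] outputs of the image and text encoders.
    # Normalize so that dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix; entry (i, j) scores image i against text j.
    logits = image_emb @ text_emb.t() / temperature

    # In-batch negatives: the matching text for image i sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

The same aligned embedding space supports the zero-shot classification mentioned above: embed each class name as text, embed the image, and predict the class whose text embedding has the highest cosine similarity to the image embedding.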

Speaker: Yinfei Yang, Google

Microsoft Research Deep Learning team: https://www.microsoft.com/en-us/research/group/deep-learning-group/



