AI advances in image captioning: Describing images as well as people do

Subscribers:
351,000
Published on ● Video Link: https://www.youtube.com/watch?v=QNesnXfyYq8



Duration: 1:03:34
3,431 views
73


Image captioning is an interesting problem at the intersection of computer vision and natural language processing, and it has attracted great attention from both research communities. Recent image captioning models have achieved impressive results on tasks where large amounts of paired image-caption training data are available. However, they generalize poorly to images in the wild, which contain a wide variety of visual objects unseen in the training caption corpora. This raises the challenge of Novel Object Captioning (NOC): generating captions that describe novel objects unseen in the paired image-caption training data, which is especially pertinent in real-world applications.

This webinar will focus on some of the recent vision-language pretraining (VLP) approaches for image captioning. We will cover our latest approaches, including object-semantics aligned pretraining (OSCAR) and visual-vocabulary pretraining (VIVO). We will also discuss their key principles and how we address the core challenges in image caption generation. Join us to learn how our discovery leads to a new image captioning framework that achieves state-of-the-art performance on the nocaps benchmark (developed to evaluate NOC at scale) and surpasses human CIDEr scores on nocaps for the first time.

Visual-vocabulary pretraining (VIVO) conducts pretraining with vision data only. Because the method does not need paired image-caption data, it opens the possibility of leveraging large numbers of images paired with either human-labeled or machine-generated tags. With VIVO pretraining, the captioning model's performance, especially on novel objects, improves substantially.
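To make the idea concrete, here is a minimal PyTorch sketch of masked-tag prediction from image region features, the kind of caption-free, image-tag pretraining described above. All module names, dimensions, and token ids are illustrative assumptions, and this is not the actual VIVO objective (the paper treats tags as an unordered set and uses a Hungarian matching loss); it only shows how tags alone, without captions, can supervise a joint visual-text encoder.

    import torch
    import torch.nn as nn

    class MaskedTagPretrainSketch(nn.Module):
        """Illustrative only: predict masked tag tokens from image region features."""
        def __init__(self, vocab_size=30522, hidden=768, layers=4, region_dim=2048):
            super().__init__()
            self.region_proj = nn.Linear(region_dim, hidden)    # project detector features
            self.tag_embed = nn.Embedding(vocab_size, hidden)   # embed tag tokens
            enc_layer = nn.TransformerEncoderLayer(hidden, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, layers)
            self.tag_head = nn.Linear(hidden, vocab_size)        # classify masked positions

        def forward(self, region_feats, tag_ids, masked_pos):
            # region_feats: (B, R, region_dim) region features from an object detector
            # tag_ids:      (B, T) tag token ids, some replaced by a [MASK] id
            # masked_pos:   (B, T) boolean tensor marking the masked tag positions
            x = torch.cat([self.region_proj(region_feats), self.tag_embed(tag_ids)], dim=1)
            h = self.encoder(x)                                  # joint visual-tag encoding
            tag_states = h[:, region_feats.size(1):]             # keep only the tag positions
            return self.tag_head(tag_states[masked_pos])         # (num_masked, vocab_size)

    # Training step (sketch): cross-entropy against the original ids of the masked tags.
    model = MaskedTagPretrainSketch()
    regions = torch.randn(2, 10, 2048)                           # stand-in detector features
    tags = torch.randint(0, 30522, (2, 5))                       # stand-in tag ids
    masked = torch.zeros(2, 5, dtype=torch.bool)
    masked[:, 0] = True                                          # mask the first tag of each image
    logits = model(regions, tags.masked_fill(masked, 103), masked)  # 103 = assumed [MASK] id
    loss = nn.functional.cross_entropy(logits, tags[masked])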

What you’ll learn:

■ How the latest VLP approaches improve captioning performance by pretraining on large-scale image-text pairs and then fine-tuning on small task-specific datasets.
■ How VIVO pretraining is conducted in the absence of image-text pairs, leading to state-of-the-art performance on NOC.
■ How visual-text alignment is learned during VLP and contributes significantly to downstream vision-language tasks.
■ How to use our open-source model and code in your research and how to use our Azure Cognitive Services cloud API in your own development (a brief usage sketch follows this list).
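As a starting point for the cloud API path, below is a minimal sketch of requesting captions from the Azure Computer Vision service in Python. It assumes the azure-cognitiveservices-vision-computervision package; the endpoint, key, and image URL are placeholders you must replace with your own values. For the research code path, see the Oscar repository in the resource list.

    # Minimal sketch: getting candidate captions from the Azure Computer Vision service.
    from azure.cognitiveservices.vision.computervision import ComputerVisionClient
    from msrest.authentication import CognitiveServicesCredentials

    endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder
    key = "<your-subscription-key>"                                          # placeholder
    client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

    # Request up to three candidate captions for a publicly reachable image URL.
    result = client.describe_image("https://example.com/photo.jpg", max_candidates=3)
    for caption in result.captions:
        print(f"{caption.text} (confidence {caption.confidence:.2f})")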

Resource list:

■ Azure Florence Project page: https://www.microsoft.com/en-us/research/project/azure-florence-vision-and-language
■ Oscar on Github: https://github.com/microsoft/Oscar
■ Oscar Publication: https://www.microsoft.com/en-us/research/publication/oscar-object-semantics-aligned-pre-training-for-vision-language-tasks
■ VIVO Publication: https://www.microsoft.com/en-us/research/publication/vivo-surpassing-human-performance-in-novel-object-captioning-with-visual-vocabulary-pre-training
■ Novel object captioning surpasses human performance on benchmarks (MSR Blog): https://www.microsoft.com/en-us/research/blog/novel-object-captioning-surpasses-human-performance-on-benchmarks
■ Objects are the secret key to revealing the world between vision and language (MSR Blog): https://www.microsoft.com/en-us/research/blog/objects-are-the-secret-key-to-revealing-the-world-between-vision-and-language
■ Azure AI describes images as well as people do (AI Blog): https://blogs.microsoft.com/ai/azure-image-captioning
■ Lijuan Wang (Researcher Profile): https://www.microsoft.com/en-us/research/people/lijuanw
■ Xiaowei Hu (Researcher Profile): https://www.microsoft.com/en-us/research/people/xiaowh

*This on-demand webinar features a previously recorded Q&A session and open captioning.

Explore more Microsoft Research webinars: https://aka.ms/msrwebinars




Other Videos By Microsoft Research


2021-03-30 Building multimodal, integrative AI systems with Platform for Situated Intelligence
2021-03-29 From player to creator: Designing video games on gaming handhelds with Microsoft TileCode webinar
2021-03-29 Camera-based non-contact health sensing
2021-03-29 Foundations of causal inference and its impacts on machine learning webinar
2021-03-29 Avatars: Finding a sense of self and others in the virtual world
2021-03-25 In pursuit of responsible AI: Bringing principles to practice
2021-03-25 Fairness-related harms in AI systems: Examples, assessment, and mitigation
2021-03-25 Enhancing mobile work and productivity with virtual reality
2021-03-23 Mixed reality and robotics: Unlocking more intuitive human-machine collaboration
2021-03-23 Project InnerEye: Augmenting cancer radiotherapy workflows with deep learning and open source
2021-03-23 AI advances in image captioning: Describing images as well as people do
2021-03-17 Reinforcement learning in Minecraft: Challenges and opportunities in multiplayer games
2021-03-17 Microsoft Vision Model ResNet-50: Pretrained vision model built with web-scale data
2021-03-11 A Tale of Two Cities: Software Developers in Practice During the COVID-19 Pandemic
2021-03-08 Directions in ML: Taking Advantage of Randomness in Expensive Optimization Problems
2021-03-08 AI and Gaming Research Summit 2021 - Fireside chat with Peter Lee and Kareem Choudhry
2021-03-08 AI and Gaming Research Summit 2021 - Computational Creativity (Day 1 Track 2.2)
2021-03-08 AI and Gaming Research Summit 2021 - Computational Creativity (Day 1 Track 2.1)
2021-03-08 AI and Gaming Research Summit 2021 - AI Agents (Day 1 Track 1.2)
2021-03-08 AI and Gaming Research Summit 2021 - AI Agents (Day 1 Track 1.1)
2021-03-08 AI and Gaming Research Summit 2021 – Welcome and Microsoft Plenary with Phil Spencer, Katja Hofmann



Tags:
AI advances
image captioning
AI image captioning
AI image description
Microsoft Research
webinar
VIVO
Azure Florence