DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)

Subscribers: 284,000
Video link: https://www.youtube.com/watch?v=h3ij3F3cPIk
Duration: 39:13
Views: 96,834
Likes: 2,996

#dino #facebook #selfsupervised

Self-Supervised Learning is the final frontier in Representation Learning: getting useful features without any labels. Facebook AI's new system, DINO, combines advances in self-supervised learning for computer vision with the new Vision Transformer (ViT) architecture and achieves impressive results without any labels. Attention maps can be directly interpreted as segmentation maps, and the obtained representations can be used for image retrieval and as k-nearest-neighbor (k-NN) classifiers without any fine-tuning.
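
To make the k-NN part concrete, here is a minimal PyTorch sketch of classifying with frozen DINO features. It is not the paper's exact protocol (the paper uses temperature-weighted voting, this uses plain majority voting); the torch.hub entry point is the one published in the official repo, and train_feats, train_labels, and the input images are placeholders for your own data.

import torch
import torch.nn.functional as F

# Load a pretrained DINO ViT-S/16 backbone from the official repo.
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

@torch.no_grad()
def extract_features(images):
    # images: (N, 3, 224, 224), ImageNet-normalized; returns L2-normalized features
    return F.normalize(model(images), dim=1)

def knn_predict(test_feats, train_feats, train_labels, k=20):
    sims = test_feats @ train_feats.T    # cosine similarity (features are unit-norm)
    _, idx = sims.topk(k, dim=1)         # indices of the k nearest training examples
    votes = train_labels[idx]            # neighbor labels, shape (N_test, k)
    return votes.mode(dim=1).values      # majority vote per test image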

OUTLINE:
0:00 - Intro & Overview
6:20 - Vision Transformers
9:20 - Self-Supervised Learning for Images
13:30 - Self-Distillation
15:20 - Building the teacher from the student by moving average
16:45 - DINO Pseudocode
23:10 - Why Cross-Entropy Loss?
28:20 - Experimental Results
33:40 - My Hypothesis why this works
38:45 - Conclusion & Comments

Paper: https://arxiv.org/abs/2104.14294
Blog: https://ai.facebook.com/blog/dino-paws-computer-vision-with-self-supervised-transformers-and-10x-more-efficient-training
Code: https://github.com/facebookresearch/dino
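
For the "attention maps as segmentation maps" part, a rough sketch of pulling the [CLS] self-attention out of a pretrained backbone; the hub entry point and the get_last_selfattention helper are from the repo linked above (to the best of my knowledge), and the random tensor stands in for a real normalized image.

import torch
import torch.nn.functional as F

model = torch.hub.load('facebookresearch/dino:main', 'dino_vits8')
model.eval()

img = torch.randn(1, 3, 224, 224)                # stand-in for a normalized image
with torch.no_grad():
    attn = model.get_last_selfattention(img)     # (1, heads, tokens, tokens)

heads = attn.shape[1]
cls_attn = attn[0, :, 0, 1:]                     # [CLS] token's attention to all patches
grid = 224 // 8                                  # 28x28 patch grid for ViT-S/8
maps = cls_attn.reshape(heads, grid, grid)       # one coarse map per head
maps = F.interpolate(maps.unsqueeze(0), scale_factor=8, mode='nearest')[0]
# Each of the `heads` maps can now be overlaid on the image as a rough segmentation.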

My Video on ViT: https://youtu.be/TrdevFK_am4
My Video on BYOL: https://youtu.be/YPfUiOMYOEE

Abstract:
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
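
The self-distillation loop covered at 16:45 boils down to a few lines. Below is a runnable toy paraphrase of the paper's Algorithm 1 (simplified to two global crops; multi-crop additionally feeds local crops through the student only), with small linear layers standing in for the ViT + projection head and random tensors standing in for augmented images.

import torch
import torch.nn.functional as F

dim_in, dim_out = 32, 16
student = torch.nn.Linear(dim_in, dim_out)
teacher = torch.nn.Linear(dim_in, dim_out)
teacher.load_state_dict(student.state_dict())  # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)                    # teacher is never trained by gradients

opt = torch.optim.SGD(student.parameters(), lr=0.1)
tps, tpt = 0.1, 0.04      # student / teacher temperatures (teacher output is sharper)
l, m = 0.996, 0.9         # network and center momentum rates
C = torch.zeros(dim_out)  # running center of teacher outputs, prevents collapse

def H(t, s):
    # cross-entropy from the centered + sharpened teacher to the student distribution
    t = t.detach()                             # stop gradient through the teacher
    log_s = F.log_softmax(s / tps, dim=1)
    t = F.softmax((t - C) / tpt, dim=1)
    return -(t * log_s).sum(dim=1).mean()

for step in range(100):
    x = torch.randn(8, dim_in)                 # stand-in for a batch of images
    x1 = x + 0.1 * torch.randn_like(x)         # stand-in for two augmented views
    x2 = x + 0.1 * torch.randn_like(x)
    s1, s2 = student(x1), student(x2)
    t1, t2 = teacher(x1), teacher(x2)
    loss = (H(t1, s2) + H(t2, s1)) / 2         # each view teaches the other
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                      # teacher = EMA of student weights
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(l).add_((1 - l) * ps)
        C = m * C + (1 - m) * torch.cat([t1, t2]).mean(dim=0)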

Authors: Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-05-29 Fast and Slow Learning of Recurrent Independent Mechanisms (Machine Learning Paper Explained)
2021-05-26 [ML News] DeepMind fails to get independence from Google
2021-05-24 Expire-Span: Not All Memories are Created Equal: Learning to Forget by Expiring (Paper Explained)
2021-05-21 FNet: Mixing Tokens with Fourier Transforms (Machine Learning Research Paper Explained)
2021-05-18 AI made this music video | What happens when OpenAI's CLIP meets BigGAN?
2021-05-15 DDPM - Diffusion Models Beat GANs on Image Synthesis (Machine Learning Research Paper Explained)
2021-05-11 Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky
2021-05-08 Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained)
2021-05-06 MLP-Mixer: An all-MLP Architecture for Vision (Machine Learning Research Paper Explained)
2021-05-04 I'm out of Academia
2021-05-01 DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
2021-04-30 Why AI is Harder Than We Think (Machine Learning Research Paper Explained)
2021-04-27 I COOKED A RECIPE MADE BY A.I. | Cooking with GPT-3 (Don't try this at home)
2021-04-19 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)
2021-04-14 I BUILT A NEURAL NETWORK IN MINECRAFT | Analog Redstone Network w/ Backprop & Optimizer (NO MODS)
2021-04-11 DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
2021-04-07 PAIR AI Explorables | Is the problem in the data? Examples on Fairness, Diversity, and Bias.
2021-03-30 Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more!
2021-03-23 Is Google Translate Sexist? Gender Stereotypes in Statistical Machine Translation
2021-03-22 Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained)
2021-03-16 Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
deep learning tutorial
what is deep learning
introduction to deep learning
facebook
facebook ai
fair
byol
swav
self supervised learning
unsupervised feature learning
unsupervised machine learning
feature engineering
stop gradient
dino
self distillation
self-distillation
segmentation maps
visual transformer
visual transformer self supervised
imagenet