An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)

Subscribers: 284,000
Published on: 2020-10-04
Video Link: https://www.youtube.com/watch?v=TrdevFK_am4
Duration: 29:56
Views: 285,202
Likes: 7,951


#ai #research #transformers

Transformers are Ruining Convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, a domain in which CNNs have classically excelled. In this video, I explain the architecture of the Vision Transformer (ViT), the reason why it works better, and rant about why double-blind peer review is broken.

OUTLINE:
0:00 - Introduction
0:30 - Double-Blind Review is Broken
5:20 - Overview
6:55 - Transformers for Images
10:40 - Vision Transformer Architecture
16:30 - Experimental Results
18:45 - What does the Model Learn?
21:00 - Why Transformers are Ruining Everything
27:45 - Inductive Biases in Transformers
29:05 - Conclusion & Comments

Paper (Under Review): https://openreview.net/forum?id=YicbFdNTTy
Arxiv version: https://arxiv.org/abs/2010.11929

BiT Paper: https://arxiv.org/pdf/1912.11370.pdf
ImageNet-ReaL Paper: https://arxiv.org/abs/2006.07159

My Video on BiT (Big Transfer): https://youtu.be/k1GOF2jmX7c
My Video on Transformers: https://youtu.be/iDulhoQ2pro
My Video on BERT: https://youtu.be/-9evrZnBorM
My Video on ResNets: https://youtu.be/GWt6Fu05voI


Abstract: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer can perform very well on image classification tasks when applied directly to sequences of image patches. When pre-trained on large amounts of data and transferred to multiple recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc), Vision Transformer attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
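To make the patch-to-token step from the abstract concrete, here is a minimal NumPy sketch (my own illustration, not the authors' code): it cuts an image into 16x16 patches, linearly projects each patch to a token, prepends a class token, and adds position embeddings. The projection and position embeddings are random placeholders rather than trained ViT weights, so treat this as a sketch of the idea only.

import numpy as np

def image_to_patch_tokens(image, patch_size=16, embed_dim=768):
    # Split an (H, W, C) image into non-overlapping patch_size x patch_size patches
    # and project each patch vector to an embed_dim-dimensional token.
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    patches = (image
               .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, patch_size * patch_size * c))
    # In the real model this projection is learned; random weights here for illustration.
    projection = np.random.randn(patch_size * patch_size * c, embed_dim) * 0.02
    tokens = patches @ projection
    # Prepend a class token and add (placeholder) position embeddings;
    # the resulting sequence is what a standard Transformer encoder consumes.
    cls_token = np.zeros((1, embed_dim))
    tokens = np.concatenate([cls_token, tokens], axis=0)
    tokens = tokens + np.random.randn(*tokens.shape) * 0.02
    return tokens

# Example: a 224x224 RGB image yields 14*14 = 196 patch tokens plus the class token.
tokens = image_to_patch_tokens(np.zeros((224, 224, 3)))
print(tokens.shape)  # (197, 768)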

Authors: Anonymous / Under Review

Errata:
- Patches are not flattened, but vectorized

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2020-12-13 - 2M All-In into $5 Pot! WWYD? Daniel Negreanu's No-Limit Hold'em Challenge! (Poker Hand Analysis)
2020-12-01 - DeepMind's AlphaFold 2 Explained! AI Breakthrough in Protein Folding! What we know (& what we don't)
2020-11-29 - Predictive Coding Approximates Backprop along Arbitrary Computation Graphs (Paper Explained)
2020-11-22 - Fourier Neural Operator for Parametric Partial Differential Equations (Paper Explained)
2020-11-15 - [News] Soccer AI FAILS and mixes up ball and referee's bald head.
2020-11-10 - Underspecification Presents Challenges for Credibility in Modern Machine Learning (Paper Explained)
2020-11-02 - Language Models are Open Knowledge Graphs (Paper Explained)
2020-10-26 - Rethinking Attention with Performers (Paper Explained)
2020-10-17 - LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)
2020-10-11 - Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)
2020-10-04 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
2020-10-03 - Training more effective learned optimizers, and using them to train themselves (Paper Explained)
2020-09-18 - The Hardware Lottery (Paper Explained)
2020-09-13 - Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)
2020-09-07 - Learning to summarize from human feedback (Paper Explained)
2020-09-02 - Self-classifying MNIST Digits (Paper Explained)
2020-08-28 - Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
2020-08-26 - Radioactive data: tracing through training (Paper Explained)
2020-08-23 - Fast reinforcement learning with generalized policy updates (Paper Explained)
2020-08-20 - What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained)
2020-08-18 - [Rant] REVIEWER #2: How Peer Review is FAILING in Machine Learning



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
attention mechanism
convolutional neural network
data science
cnn
transformer
attention is all you need
vaswani
beyer
google
google brain
google research
tpu
tpu v3
iclr
iclr 2021
peer review
anonymous
karpathy
andrej karpathy
twitter
review
under submission
big transfer
bit
vit
vision transformer
visual transformer
transformer images
transformer computer vision