Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained)

Subscribers: 284,000
Published on: 2021-08-26
Video Link: https://www.youtube.com/watch?v=qgUegkefocg
Duration: 35:30
Views: 26,986
Likes: 874

#attention #transformer #fastformer

Transformers have become the dominant model class for large-scale data in recent years, but their quadratic complexity in sequence length has plagued them until now. Fastformer claims to be the fastest and most performant linear attention variant, able to consume long contexts at once. It achieves this with a combination of additive attention and elementwise products. While the initial results look promising, I have my reservations...
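For intuition on the complexity claim, here is a tiny PyTorch sketch (my own illustration, not code from the paper) contrasting classic self-attention, which builds an n-by-n score matrix, with the additive attention pooling discussed in the video, which scores each token against a single learned vector:

```python
import torch
import torch.nn.functional as F

n, d = 1024, 64
x = torch.randn(n, d)  # a toy sequence of n token vectors

# Classic self-attention: an n-by-n score matrix -> O(n^2) in sequence length.
scores = (x @ x.T) / d ** 0.5                  # (n, n)
classic_out = F.softmax(scores, dim=-1) @ x    # (n, d)

# Additive attention: one learned vector scores each token -> O(n).
w = torch.randn(d)                             # stand-in for a learned parameter
alpha = F.softmax((x @ w) / d ** 0.5, dim=0)   # (n,) one scalar weight per token
global_vec = (alpha[:, None] * x).sum(dim=0)   # (d,) a single summary of the sequence
```

Additive attention compresses the entire sequence into one vector rather than letting every token attend to every other token, which is exactly the property questioned in the "Is this even attention?" segment.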

OUTLINE:
0:00 - Intro & Outline
2:15 - Fastformer description
5:20 - Baseline: Classic Attention
10:00 - Fastformer architecture
12:50 - Additive Attention
18:05 - Query-Key element-wise multiplication
21:35 - Redundant modules in Fastformer
25:00 - Problems with the architecture
27:30 - Is this even attention?
32:20 - Experimental Results
34:50 - Conclusion & Comments

Paper: https://arxiv.org/abs/2108.09084

Abstract:
The Transformer is a powerful model for text understanding. However, it is inefficient due to its quadratic complexity with respect to input sequence length. Although there are many methods for Transformer acceleration, they are still either inefficient on long sequences or not effective enough. In this paper, we propose Fastformer, an efficient Transformer model based on additive attention. Instead of modeling the pair-wise interactions between tokens, Fastformer first uses an additive attention mechanism to model global contexts, and then further transforms each token representation based on its interaction with the global context representations. In this way, Fastformer achieves effective context modeling with linear complexity. Extensive experiments on five datasets show that Fastformer is much more efficient than many existing Transformer models while achieving comparable or even better long-text modeling performance.
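Read as code, the abstract's pipeline (pool the queries into a global query with additive attention, mix it into the keys elementwise, pool again into a global key, mix that into the values) looks roughly like the following single-head PyTorch sketch. This is my own reconstruction for illustration, not the authors' implementation, and it omits the multi-head splitting and parameter sharing used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastformerAttention(nn.Module):
    """Single-head sketch of Fastformer-style additive attention."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1, bias=False)  # scores queries for pooling
        self.w_k = nn.Linear(dim, 1, bias=False)  # scores mixed keys for pooling
        self.to_out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                         # x: (batch, n, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # 1) Additive attention pools all n queries into ONE global query.
        alpha = F.softmax(self.w_q(q) * self.scale, dim=1)  # (batch, n, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)     # (batch, 1, dim)

        # 2) Elementwise product spreads the global query over every key.
        p = global_q * k                                    # (batch, n, dim)

        # 3) A second additive attention pools p into ONE global key.
        beta = F.softmax(self.w_k(p) * self.scale, dim=1)   # (batch, n, 1)
        global_k = (beta * p).sum(dim=1, keepdim=True)      # (batch, 1, dim)

        # 4) Elementwise product with the values, output projection,
        #    and a residual connection back to the queries.
        u = global_k * v                                    # (batch, n, dim)
        return self.to_out(u) + q

layer = FastformerAttention(64)
out = layer(torch.randn(2, 128, 64))  # -> (2, 128, 64), no n-by-n matrix anywhere
```

Every step is a linear projection, a softmax over n scalars, or an elementwise product, so the cost is linear in sequence length rather than quadratic, which is the complexity claim of the abstract.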

Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-09-24  [ML News] New ImageNet SOTA | Uber's H3 hexagonal coordinate system | New text-image-pair dataset
2021-09-21  Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset
2021-09-20  Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
2021-09-16  [ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog
2021-09-14  Celebrating 100k Subscribers! (w/ Channel Statistics)
2021-09-10  [ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero
2021-09-06  ∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained)
2021-09-03  [ML News] Blind Chess AI Competition | Graph NNs for traffic | AI gift suggestions
2021-09-02  ALiBi - Train Short, Test Long: Attention with linear biases enables input length extrapolation
2021-08-27  [ML News] Stanford HAI coins Foundation Models & High-profile case of plagiarism uncovered
2021-08-26  Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained)
2021-08-23  PonderNet: Learning to Ponder (Machine Learning Research Paper Explained)
2021-08-19  NeuralHash is BROKEN - How to evade Apple's detection & craft hash collisions (w/ Open Source Code)
2021-08-18  [ML News] Nvidia renders CEO | Jurassic-1 larger than GPT-3 | Tortured Phrases reveal Plagiarism
2021-08-16  How Apple scans your phone (and how to evade it) - NeuralHash CSAM Detection Algorithm Explained
2021-08-13  [ML NEWS] Apple scans your phone | Master Faces beat face recognition | WALL-E is real
2021-08-06  [ML News] AI-generated patent approved | Germany gets an analog to OpenAI | ML cheats video games
2021-08-02  [ML News] MMO Game destroys GPUs | OpenAI quits Robotics | Today w/ guest host Sanyam Bhutani
2021-07-15  [ML News] Facebook AI adapting robots | Baidu autonomous excavators | Happy Birthday EleutherAI
2021-07-11  I'm taking a break
2021-07-08  [ML News] GitHub Copilot - Copyright, GPL, Patents & more | Brickit LEGO app | Distill goes on break



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
attention mechanism
attention is all you need
fastformer
fast former
nlp
natural language processing
linear attention
linear transformer
query key value
additive attention
elementwise product
fast transformer
faster transformer
transformer memory
attention quadratic memory
fastformer explained