Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained)

Subscribers: 284,000
Published on: 2021-05-08
Video Link: https://www.youtube.com/watch?v=pH2jZun8MoY
Duration: 30:54
Views: 23,563
Likes: 836

#involution #computervision #attention

Convolutional Neural Networks (CNNs) have dominated computer vision for almost a decade by applying two fundamental principles: spatial-agnostic and channel-specific computation. Involution inverts these principles, presenting a computation that is spatial-specific but channel-agnostic. The resulting involution operator and RedNet architecture sit between classic convolutions and the newer local self-attention architectures, and compare favorably to both in terms of the computation-accuracy tradeoff.
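As a rough sketch of the idea (not the authors' implementation; the weight names `w_reduce` and `w_span` and the single-image, stride-1 setup are illustrative assumptions), involution generates a K×K kernel at each pixel from that pixel's own feature vector, then applies it across the neighborhood, sharing the kernel over all channels within a group:

```python
import numpy as np

def involution(x, w_reduce, w_span, kernel_size=3, groups=1):
    """Minimal involution sketch (NumPy, single image, stride 1).

    x:        (C, H, W) feature map.
    w_reduce: (C, C_r) weights of a 1x1 reduction layer (kernel generation).
    w_span:   (C_r, K*K*G) weights that span the per-pixel kernel.

    The kernel generated at each pixel is shared by all C//G channels in a
    group (channel-agnostic) but differs per location (spatial-specific) --
    the inverse of convolution's spatial-agnostic, channel-specific design.
    """
    C, H, W = x.shape
    K, G = kernel_size, groups
    pad = K // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            # Generate the kernel from this single pixel's feature vector.
            feat = x[:, i, j]                       # (C,)
            kern = feat @ w_reduce @ w_span         # (K*K*G,)
            kern = kern.reshape(G, K, K)
            # Apply it over the K x K neighborhood around (i, j).
            patch = xp[:, i:i + K, j:j + K]         # (C, K, K)
            pg = patch.reshape(G, C // G, K, K)     # each group shares a kernel
            out[:, i, j] = (pg * kern[:, None]).sum(axis=(2, 3)).reshape(C)
    return out
```

The double loop is only for clarity; the paper's released code vectorizes this with unfold-style operations. The key contrast with convolution is visible in the parameter shapes: no weight tensor depends on both the spatial kernel position and the output channel at once.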

OUTLINE:
0:00 - Intro & Overview
3:00 - Principles of Convolution
10:50 - Towards spatial-specific computations
17:00 - The Involution Operator
20:00 - Comparison to Self-Attention
25:15 - Experimental Results
30:30 - Comments & Conclusion

Paper: https://arxiv.org/abs/2103.06255
Code: https://github.com/d-li14/involution

Abstract:
Convolution has been the core ingredient of modern neural networks, triggering the surge of deep learning in vision. In this work, we rethink the inherent principles of standard convolution for vision tasks, specifically spatial-agnostic and channel-specific. Instead, we present a novel atomic operation for deep neural networks by inverting the aforementioned design principles of convolution, coined as involution. We additionally demystify the recent popular self-attention operator and subsume it into our involution family as an over-complicated instantiation. The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition, powering different deep learning models on several prevalent benchmarks, including ImageNet classification, COCO detection and segmentation, together with Cityscapes segmentation. Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely while compressing the computational cost to 66%, 65%, 72%, and 57% on the above benchmarks, respectively. Code and pre-trained models for all the tasks are available at this https URL.

Authors: Duo Li, Jie Hu, Changhu Wang, Xiangtai Li, Qi She, Lei Zhu, Tong Zhang, Qifeng Chen

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-06-02 [ML News] Anthropic raises $124M, ML execs clueless, collusion rings, ELIZA source discovered & more
2021-05-31 Reward Is Enough (Machine Learning Research Paper Explained)
2021-05-30 [Rant] Can AI read your emotions? (No, but ...)
2021-05-29 Fast and Slow Learning of Recurrent Independent Mechanisms (Machine Learning Paper Explained)
2021-05-26 [ML News] DeepMind fails to get independence from Google
2021-05-24 Expire-Span: Not All Memories are Created Equal: Learning to Forget by Expiring (Paper Explained)
2021-05-21 FNet: Mixing Tokens with Fourier Transforms (Machine Learning Research Paper Explained)
2021-05-18 AI made this music video | What happens when OpenAI's CLIP meets BigGAN?
2021-05-15 DDPM - Diffusion Models Beat GANs on Image Synthesis (Machine Learning Research Paper Explained)
2021-05-11 Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky
2021-05-08 Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained)
2021-05-06 MLP-Mixer: An all-MLP Architecture for Vision (Machine Learning Research Paper Explained)
2021-05-04 I'm out of Academia
2021-05-01 DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
2021-04-30 Why AI is Harder Than We Think (Machine Learning Research Paper Explained)
2021-04-27 I COOKED A RECIPE MADE BY A.I. | Cooking with GPT-3 (Don't try this at home)
2021-04-19 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)
2021-04-14 I BUILT A NEURAL NETWORK IN MINECRAFT | Analog Redstone Network w/ Backprop & Optimizer (NO MODS)
2021-04-11 DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
2021-04-07 PAIR AI Explorables | Is the problem in the data? Examples on Fairness, Diversity, and Bias.
2021-03-30 Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more!



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
what is deep learning
deep learning tutorial
introduction to deep learning
computer vision
convolutional neural network
convolutions alternative
cnn attention
self attention
attention mechanism for vision
weight sharing neural networks
convolutions vision
cnn vision
involution vision
image segmentation
rednet
resnet
residual neural networks
bytedance ai