Fast and Slow Learning of Recurrent Independent Mechanisms (Machine Learning Paper Explained)

Published: 2021-05-29
Video Link: https://www.youtube.com/watch?v=kU-tWy_wr78
Duration: 45:06

#metarim #deeprl #catastrophicforgetting

Reinforcement learning is very tricky in environments where the objective shifts over time. This paper explores agents in multi-task environments, which are usually subject to catastrophic forgetting. Building on the concept of Recurrent Independent Mechanisms (RIMs), the authors propose to separate the learning procedures for the mechanism parameters (fast) and the attention parameters (slow), achieving superior results, more stability, and even better zero-shot transfer performance.
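The core trick is easy to sketch in code: keep two parameter groups and give them different learning rates. Below is a minimal, hypothetical PyTorch sketch (not the authors' implementation; the toy model, dimensions, and learning rates are all assumptions for illustration):

import torch
import torch.nn as nn

class TinyModularNet(nn.Module):
    """Toy stand-in for a RIM-style model: independent modules ("mechanisms")
    plus an attention layer that scores which module handles the input."""
    def __init__(self, dim=16, n_modules=4):
        super().__init__()
        # "Mechanisms": independently parameterized modules (fast parameters).
        self.mechanisms = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modules))
        # Attention that selects modules (slow meta-parameters).
        self.attn = nn.Linear(dim, n_modules)

    def forward(self, x):
        # Soft selection: weight each module's output by its attention score.
        scores = torch.softmax(self.attn(x), dim=-1)             # (batch, n_modules)
        outs = torch.stack([m(x) for m in self.mechanisms], -1)  # (batch, dim, n_modules)
        return (outs * scores.unsqueeze(1)).sum(-1)

model = TinyModularNet()
optimizer = torch.optim.Adam([
    {"params": model.mechanisms.parameters(), "lr": 1e-3},  # fast: mechanism parameters
    {"params": model.attn.parameters(), "lr": 1e-5},        # slow: attention meta-parameters
])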

OUTLINE:
0:00 - Intro & Overview
3:30 - Recombining pieces of knowledge
11:30 - Controllers as recurrent neural networks
14:20 - Recurrent Independent Mechanisms
21:20 - Learning at different time scales
28:40 - Experimental Results & My Criticism
44:20 - Conclusion & Comments

Paper: https://arxiv.org/abs/2105.08710
RIM Paper: https://arxiv.org/abs/1909.10893

Abstract:
Decomposing knowledge into interchangeable pieces promises a generalization advantage when there are changes in distribution. A learning agent interacting with its environment is likely to be faced with situations requiring novel combinations of existing pieces of knowledge. We hypothesize that such a decomposition of knowledge is particularly relevant for being able to generalize in a systematic manner to out-of-distribution changes. To study these ideas, we propose a particular training framework in which we assume that the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks. An attention mechanism dynamically selects which modules can be adapted to the current task, and the parameters of the selected modules are allowed to change quickly as the learner is confronted with variations in what it experiences, while the parameters of the attention mechanisms act as stable, slowly changing, meta-parameters. We focus on pieces of knowledge captured by an ensemble of modules sparsely communicating with each other via a bottleneck of attention. We find that meta-learning the modular aspects of the proposed system greatly helps in achieving faster adaptation in a reinforcement learning setup involving navigation in a partially observed grid world with image-level input. We also find that reversing the role of parameters and meta-parameters does not work nearly as well, suggesting a particular role for fast adaptation of the dynamically selected modules.
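Schematically, the training framework in the abstract amounts to a two-timescale loop: the selected modules adapt quickly within each task, while the attention parameters change slowly across tasks as meta-parameters. A self-contained toy sketch of that structure (assumed form with placeholder data, not the paper's partially observed grid-world setup):

import torch
import torch.nn as nn

dim, n_modules = 16, 4
mechanisms = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modules))  # fast parameters
attention = nn.Linear(dim, n_modules)                                      # slow meta-parameters

fast_opt = torch.optim.SGD(mechanisms.parameters(), lr=1e-2)
slow_opt = torch.optim.SGD(attention.parameters(), lr=1e-4)

def forward(x):
    # Attention bottleneck: softly weight each module's contribution.
    scores = torch.softmax(attention(x), dim=-1)
    outs = torch.stack([m(x) for m in mechanisms], dim=-1)
    return (outs * scores.unsqueeze(1)).sum(dim=-1)

for task in range(3):                      # placeholder task distribution
    x, target = torch.randn(8, dim), torch.randn(8, dim)  # toy task data
    slow_opt.zero_grad()
    for step in range(10):                 # inner loop: fast adaptation of modules
        loss = nn.functional.mse_loss(forward(x), target)
        fast_opt.zero_grad()               # clear only the mechanism gradients
        loss.backward()                    # attention gradients keep accumulating
        fast_opt.step()
    slow_opt.step()                        # outer loop: one slow attention update per task

The abstract's ablation (reversing the roles, i.e., fast attention and slow modules) would correspond to swapping the two learning rates here, which the authors report works much worse.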

Authors: Kanika Madan, Nan Rosemary Ke, Anirudh Goyal, Bernhard Schölkopf, Yoshua Bengio

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-06-23 XCiT: Cross-Covariance Image Transformers (Facebook AI Machine Learning Research Paper Explained)
2021-06-19 AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control (Paper Explained)
2021-06-16 [ML News] De-Biasing GPT-3 | RL cracks chip design | NetHack challenge | Open-Source GPT-J
2021-06-11 Efficient and Modular Implicit Differentiation (Machine Learning Research Paper Explained)
2021-06-09 [ML News] EU regulates AI, China trains 1.75T model, Google's oopsie, Everybody cheers for fraud.
2021-06-08 My GitHub (Trash code I wrote during PhD)
2021-06-05 Decision Transformer: Reinforcement Learning via Sequence Modeling (Research Paper Explained)
2021-06-02 [ML News] Anthropic raises $124M, ML execs clueless, collusion rings, ELIZA source discovered & more
2021-05-31 Reward Is Enough (Machine Learning Research Paper Explained)
2021-05-30 [Rant] Can AI read your emotions? (No, but ...)
2021-05-29 Fast and Slow Learning of Recurrent Independent Mechanisms (Machine Learning Paper Explained)
2021-05-26 [ML News] DeepMind fails to get independence from Google
2021-05-24 Expire-Span: Not All Memories are Created Equal: Learning to Forget by Expiring (Paper Explained)
2021-05-21 FNet: Mixing Tokens with Fourier Transforms (Machine Learning Research Paper Explained)
2021-05-18 AI made this music video | What happens when OpenAI's CLIP meets BigGAN?
2021-05-15 DDPM - Diffusion Models Beat GANs on Image Synthesis (Machine Learning Research Paper Explained)
2021-05-11 Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky
2021-05-08 Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained)
2021-05-06 MLP-Mixer: An all-MLP Architecture for Vision (Machine Learning Research Paper Explained)
2021-05-04 I'm out of Academia
2021-05-01 DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
recurrent independent mechanisms
metarim
deep learning tutorial
introduction to deep learning
what is deep learning
machine learning paper
deep reinforcement learning
reinforcement learning meta learning
yoshua bengio
bengio mila
grid world
fast and slow learning
reinforcement learning attention
catastrophic forgetting
lifelong learning
multitask learning