Fast reinforcement learning with generalized policy updates (Paper Explained)

Subscribers: 284,000
Published: 2020-08-23
Video Link: https://www.youtube.com/watch?v=9-o2aAoN0rY
Duration: 55:11
Views: 10,228
Likes: 349

#ai #research #reinforcementlearning

Reinforcement Learning is a powerful tool, but it is also incredibly data-hungry. Given a new task, an RL agent has to learn a good policy entirely from scratch. This paper proposes a framework that lets an agent carry knowledge over from previously solved tasks to new ones, even deriving zero-shot policies that perform well on completely new reward functions.

OUTLINE:
0:00 - Intro & Overview
1:25 - Problem Statement
6:25 - Q-Learning Primer
11:40 - Multiple Rewards, Multiple Policies
14:25 - Example Environment
17:35 - Tasks as Linear Mixtures of Features
24:15 - Successor Features
28:00 - Zero-Shot Policy for New Tasks
35:30 - Results on New Task W3
37:00 - Inferring the Task via Regression
39:20 - The Influence of the Given Policies
48:40 - Learning the Feature Functions
50:30 - More Complicated Tasks
51:40 - Life-Long Learning, Comments & Conclusion

Paper: https://www.pnas.org/content/early/2020/08/13/1907370117

My Video on Successor Features: https://youtu.be/KXEEqcwXn8w

Abstract:
The combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision-making problems that are currently intractable. One obstacle to overcome is the amount of data needed by learning systems of this type. In this article, we propose to address this issue through a divide-and-conquer approach. We argue that complex decision problems can be naturally decomposed into multiple tasks that unfold in sequence or in parallel. By associating each task with a reward function, this problem decomposition can be seamlessly accommodated within the standard reinforcement-learning formalism. The specific way we do so is through a generalization of two fundamental operations in reinforcement learning: policy improvement and policy evaluation. The generalized versions of these operations allow one to leverage the solution of some tasks to speed up the solution of others. If the reward function of a task can be well approximated as a linear combination of the reward functions of tasks previously solved, we can reduce a reinforcement-learning problem to a simpler linear regression. When this is not the case, the agent can still exploit the task solutions by using them to interact with and learn about the environment. Both strategies considerably reduce the amount of data needed to solve a reinforcement-learning problem.
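
To make the abstract's two operations concrete, here is a minimal NumPy sketch of generalized policy evaluation (GPE) and generalized policy improvement (GPI) over successor features, plus the linear-regression route for inferring a new task's weight vector. All names and shapes (psi, w_hat, gpi_action, the environment sizes) are illustrative placeholders for a small discrete setting, not code from the paper:

```python
import numpy as np

# Illustrative sizes for a tiny discrete environment (assumptions, not from the paper).
n_states, n_actions, n_features, n_policies = 10, 4, 3, 2

rng = np.random.default_rng(0)

# Successor features psi_i(s, a) of each previously learned policy pi_i:
# psi_i(s, a) = E[ sum_t gamma^t phi(s_t, a_t) | s_0 = s, a_0 = a, pi_i ].
# Random placeholders of the right shape stand in for learned values here.
psi = rng.random((n_policies, n_states, n_actions, n_features))

def gpe(psi, w):
    """GPE: Q^{pi_i}(s, a; w) = psi_i(s, a) . w, evaluated for every known policy."""
    return psi @ w  # shape: (n_policies, n_states, n_actions)

def gpi_action(psi, w, s):
    """GPI: act greedily w.r.t. the max over all known policies' Q-values."""
    q = gpe(psi, w)                        # (n_policies, n_states, n_actions)
    return int(q[:, s, :].max(axis=0).argmax())

# Strategy 1 from the abstract: if the new task's reward is (approximately)
# linear in the features, r(s, a) ~ phi(s, a) . w, the task is identified by
# ordinary linear regression on observed (feature, reward) pairs.
phi_samples = rng.random((100, n_features))   # observed feature vectors
w_true = np.array([1.0, -0.5, 0.2])           # the unknown task (for the demo)
r_samples = phi_samples @ w_true              # observed rewards
w_hat, *_ = np.linalg.lstsq(phi_samples, r_samples, rcond=None)

# Zero-shot behavior on the new task: no further RL, just GPE + GPI.
action = gpi_action(psi, w_hat, s=0)
```

The key point of the sketch: once the task weights w_hat are regressed, the agent needs no additional policy learning at all; the stored successor features of old policies are re-evaluated under the new reward and combined by the max in GPI.
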

Authors:
André Barreto, Shaobo Hou, Diana Borsa, David Silver, and Doina Precup

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2020-10-17  LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)
2020-10-11  Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)
2020-10-04  An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
2020-10-03  Training more effective learned optimizers, and using them to train themselves (Paper Explained)
2020-09-18  The Hardware Lottery (Paper Explained)
2020-09-13  Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)
2020-09-07  Learning to summarize from human feedback (Paper Explained)
2020-09-02  Self-classifying MNIST Digits (Paper Explained)
2020-08-28  Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
2020-08-26  Radioactive data: tracing through training (Paper Explained)
2020-08-23  Fast reinforcement learning with generalized policy updates (Paper Explained)
2020-08-20  What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained)
2020-08-18  [Rant] REVIEWER #2: How Peer Review is FAILING in Machine Learning
2020-08-14  REALM: Retrieval-Augmented Language Model Pre-Training (Paper Explained)
2020-08-12  Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
2020-08-09  Hopfield Networks is All You Need (Paper Explained)
2020-08-06  I TRAINED AN AI TO SOLVE 2+2 (w/ Live Coding)
2020-08-04  PCGRL: Procedural Content Generation via Reinforcement Learning (Paper Explained)
2020-08-02  Big Bird: Transformers for Longer Sequences (Paper Explained)
2020-07-29  Self-training with Noisy Student improves ImageNet classification (Paper Explained)
2020-07-26  [Classic] Playing Atari with Deep Reinforcement Learning (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
rl
deep rl
q learning
deep reinforcement learning
q learning machine learning
deep q learning
successor features
deep mind
zero shot
environment
agent
task
linear
regression
reward
mila
neural network
reinforcement learning
value function
state value function
state value