Imagination-Augmented Agents for Deep Reinforcement Learning
Commentary on
https://arxiv.org/abs/1707.06203
Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
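The core idea in the abstract — imagining rollouts with a learned environment model, encoding them, and feeding the codes to the policy alongside the raw observation — can be illustrated with a minimal toy sketch. This is not the paper's implementation: all dimensions, function names, and the stand-in "learned" model below are illustrative assumptions (the paper uses convolutional networks and LSTM rollout encoders).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, not taken from the paper.
OBS_DIM, N_ACTIONS, ROLLOUT_LEN, CODE_DIM = 8, 4, 3, 5

def env_model(obs, action):
    """Stand-in for a learned environment model: predicts the next
    observation. A fixed linear map replaces a trained network here."""
    W = np.full((OBS_DIM, OBS_DIM), 0.1)
    return np.tanh(W @ obs + action)

def rollout_encoder(trajectory):
    """Encode an imagined trajectory into a fixed-size code.
    Mean-pooling stands in for the paper's learned (LSTM) encoder."""
    return np.mean(trajectory, axis=0)[:CODE_DIM]

def i2a_policy_context(obs):
    """Imagination-augmented context: imagine one short rollout per
    candidate action, encode each, and concatenate the codes with the
    raw (model-free) observation for the downstream policy network."""
    codes = []
    for a in range(N_ACTIONS):
        traj, o = [], obs
        for _ in range(ROLLOUT_LEN):
            o = env_model(o, a)  # imagine one step ahead
            traj.append(o)
        codes.append(rollout_encoder(np.stack(traj)))
    return np.concatenate([obs] + codes)

ctx = i2a_policy_context(rng.standard_normal(OBS_DIM))
print(ctx.shape)  # → (28,) i.e. OBS_DIM + N_ACTIONS * CODE_DIM
```

The point of the sketch is the interface: the policy is free to learn how to interpret the rollout codes, rather than being told how to plan with the model — which is the "implicit plans in arbitrary ways" claim in the abstract.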
Authors
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, Daan Wierstra
Other Videos By Yannic Kilcher
2019-01-30 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
2019-01-09 | What’s in a name? The need to nip NIPS
2018-12-21 | Stochastic RNNs without Teacher-Forcing
2018-12-18 | Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
2018-04-07 | World Models
2018-03-18 | Curiosity-driven Exploration by Self-supervised Prediction
2017-12-13 | git for research basics: fundamentals, commits, branches, merging
2017-11-28 | Attention Is All You Need
2017-08-28 | Reinforcement Learning with Unsupervised Auxiliary Tasks
2017-08-09 | Learning model-based planning from scratch
2017-08-04 | Imagination-Augmented Agents for Deep Reinforcement Learning