Thinking While Moving: Deep Reinforcement Learning with Concurrent Control

Video link: https://www.youtube.com/watch?v=pZyxlf6l0N8
Duration: 29:41

Classic RL "stops" the world whenever the agent computes a new action. This paper considers a more realistic scenario in which the agent thinks about its next action while still performing the last one. This leads to a fascinating reformulation of Q-learning: first in continuous time, then with concurrency introduced, and finally discretized back to discrete time.
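To make the blocking-vs-concurrent distinction concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation): in the concurrent loop, the environment keeps evolving under the previous action for the duration of the inference latency, and, following the paper's idea, the value function is conditioned on the previous action and the elapsed execution time in addition to the state. All names (`select_action`, `concurrent_loop`, `q_table`, `env_step`) are illustrative assumptions.

```python
# Hypothetical sketch of "thinking while moving". The agent selects its
# next action while the previous action is still being executed, so the
# Q-function must be conditioned on (state, previous action, elapsed time).

def select_action(q_table, state, prev_action, elapsed, actions=(0, 1)):
    # Greedy argmax over Q(state, prev_action, elapsed, a); unseen entries
    # default to 0.0. A real agent would use a neural network here.
    return max(actions,
               key=lambda a: q_table.get((state, prev_action, elapsed, a), 0.0))

def concurrent_loop(env_step, q_table, steps=5, latency=1):
    # Concurrent control loop: while inference "runs" for `latency`
    # environment ticks, the previous action continues to act on the world.
    # (A classic blocking loop would instead freeze the environment here.)
    state, prev_action = 0, 0
    trajectory = []
    for _ in range(steps):
        # Begin computing the next action, conditioned on the action
        # currently in flight and the expected inference latency.
        next_action = select_action(q_table, state, prev_action, latency)
        # Meanwhile the environment evolves under the previous action.
        for _ in range(latency):
            state = env_step(state, prev_action)
        trajectory.append((state, next_action))
        prev_action = next_action
    return trajectory
```

With a toy deterministic environment, the loop runs end to end; the key point is that `state` has already moved on by the time `next_action` takes effect, which is exactly the delay the paper's delay-aware Bellman discretization accounts for.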

https://arxiv.org/abs/2004.06089

Abstract:
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving".

Authors: Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher


Tags:
deep learning
machine learning
reinforcement learning
vector to go
vtg
continuous
control
robot
concurrent
deep rl
deep neural networks
berkeley
google
grasping
qlearning