Hippocampal Replay for Learning (3 min summary)
Using Hippocampal Replay to Consolidate Experiences in Memory-Augmented Reinforcement Learning (Paper ID 38)
In-depth video explaining paper (+ bonus future work of Goal-Directed Intrinsic Reward): https://www.youtube.com/watch?v=SG02XgfzxEg
See updated ideas here in RL Fast and Slow: https://www.youtube.com/watch?v=M10f3ihj3cE
Go-Explore Explanation: https://www.youtube.com/watch?v=oyyOa_nJeDs
Paper link: https://openreview.net/forum?id=RAOVIJ8rZR
Code: https://github.com/tanchongmin/Hippocampal-Replay
#MemARI_2022
Brief description:
Traditional Reinforcement Learning (RL) agents have difficulty learning from a sparse reward signal. To overcome this, we use a memory augmentation mechanism similar to Go-Explore and store the most competent trajectories in memory. To enable consistent performance, we use hippocampal replay (preplay to consolidate states, replay to update the memory of states) to generate an "exploration highway" that facilitates exploration of good states in the future. This method of hippocampal replay leads to more consistent performance (a higher solve rate) with less exploration (a higher minimum number of steps to solve).
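The core memory mechanism above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual code: class and method names (TrajectoryMemory, consolidate, replay) are my own, and "competence" is simplified to episode return.

```python
class TrajectoryMemory:
    """Toy sketch of the memory-augmentation idea: keep the most
    competent trajectory seen so far, then replay it as an
    'exploration highway' to guide future episodes."""

    def __init__(self):
        self.best_return = float("-inf")
        self.best_trajectory = []

    def consolidate(self, trajectory, episode_return):
        # Consolidation step: store the trajectory only if it is more
        # competent (higher return) than what memory currently holds.
        if episode_return > self.best_return:
            self.best_return = episode_return
            self.best_trajectory = list(trajectory)

    def replay(self):
        # Replay step: return the stored states so a fresh episode can
        # follow this highway back to good states before exploring on.
        return list(self.best_trajectory)


memory = TrajectoryMemory()
memory.consolidate(["s0", "s1", "goal"], episode_return=1.0)
memory.consolidate(["s0", "s2"], episode_return=0.0)  # less competent, ignored
print(memory.replay())  # the more competent trajectory is kept
```

In the paper, replay additionally updates the stored states rather than just returning them; this sketch only shows the keep-the-best-trajectory core.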
~~~~~~~~~~~~~~~~~~~~~~~~~
AI and ML enthusiast. Likes to think about the essences behind AI breakthroughs and explain them in a simple and relatable way. Also an avid game creator.
Discord: https://discord.gg/fXCZCPYs
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin