A New Framework of Memory for Learning (Part 1)

Published on: 2022-12-12 ● Video Link: https://www.youtube.com/watch?v=q9uMEAcB3lM



Duration: 1:36:10


Over the course of NeurIPS 2022, I came up with an idea for how to use memory to improve learning in modern neural networks. This is interesting because modern neural networks learn very slowly, especially in reinforcement learning with continuously changing targets, and it would be good if we could imbue them with a form of memory so that they learn faster.

This is Part 1 of the discussion session; check out Part 2 for the applications to Reinforcement Learning.
Part 2 link here: https://www.youtube.com/watch?v=M10f3ihj3cE

Special thanks to Shuchen for a good discussion, especially on abstraction. I think it is interesting to consider how memory is stored as the first layer of abstraction, before being mapped to the latent space of word/image embeddings, etc.

Slides can be found at: https://github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/A%20New%20Framework%20of%20Memory%20for%20Learning.pdf

0:00 Motivation
1:37 Star Tracing Task and Henry Molaison
5:07 HM and Modern Neural Networks
8:06 Neural Networks interpolate, not extrapolate
10:43 Memories as abstraction for generalisation
17:28 Discussion on the meaning of abstraction
43:04 Recursive Abstraction
47:31 Lossy Abstraction
49:45 Memories as a multi-modal system
1:07:20 Incomplete Hash
1:10:20 Multiple Referencing the Hash Table
1:19:40 Storing and Forgetting Memories
1:30:10 Inter-memory linkages
1:32:08 Q&A and Teaser for next session
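To make the hash-table ideas in the outline above more concrete (lossy abstraction, referencing the hash table, storing and forgetting memories), here is a minimal toy sketch in Python. This is my own illustrative assumption of how such a memory could look, not the exact scheme from the talk: `abstract` is a hypothetical lossy hash that rounds coordinates so that nearby observations share a key, and forgetting is implemented as simple least-recently-used eviction.

```python
from collections import OrderedDict


def abstract(vector, precision=1):
    """Lossy abstraction (assumed for illustration): round each coordinate
    so that nearby observations map to the same hash key."""
    return tuple(round(x, precision) for x in vector)


class EpisodicMemory:
    """Toy hash-table memory with capacity-based forgetting (LRU eviction)."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> stored value, ordered by recency

    def write(self, observation, value):
        key = abstract(observation)
        if key in self.store:
            self.store.move_to_end(key)  # refresh recency on overwrite
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # forget the least recently used memory

    def read(self, observation):
        key = abstract(observation)
        if key in self.store:
            self.store.move_to_end(key)  # retrieval also refreshes recency
            return self.store[key]
        return None  # incomplete/unknown key: no memory retrieved


mem = EpisodicMemory(capacity=2)
mem.write([0.11, 0.22], "reward here")
mem.write([0.50, 0.90], "trap here")
print(mem.read([0.12, 0.18]))  # a nearby observation hits the same abstract key
```

Because the abstraction is lossy, a slightly different observation such as `[0.12, 0.18]` rounds to the same key as `[0.11, 0.22]` and retrieves the stored memory, which is one way generalisation can fall out of abstraction.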

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

AI and ML enthusiast. I like to think about the essences behind AI breakthroughs and explain them in a simple and relatable way. I am also an avid game creator.

Discord: https://discord.gg/fXCZCPYs
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin




Other Videos By John Tan Chong Min


2023-02-28 Learning Part-Whole Structure by Chunking - More Efficient than Deep Learning!!!
2023-02-21 High-level planning with large language models - SayCan
2023-02-13 Learning, Fast and Slow: Towards Fast and Adaptable Agents in Changing Environments
2023-02-07 Using Logic Gates as Neurons - Deep Differentiable Logic Gate Networks!
2023-01-31 Learn from External Memory, not just Weights: Large-Scale Retrieval for Reinforcement Learning
2023-01-17 How ChatGPT works - From Transformers to Reinforcement Learning with Human Feedback (RLHF)
2023-01-09 HyperTree Proof Search - Automated Theorem Proving with AlphaZero and Transformers!
2022-12-23 CodinGame Fall Challenge 2022: A First Look (managed to get to Silver!)
2022-12-21 Can ChatGPT solve CodinGame/Google Kickstart problems?
2022-12-19 Reinforcement Learning Fast and Slow: Goal-Directed and Memory Retrieval Mechanism!
2022-12-12 A New Framework of Memory for Learning (Part 1)
2022-11-14 Hippocampal Replay for Learning (Full Length with Questions)
2022-11-14 Hippocampal Replay for Learning (3 min summary)
2022-11-07 AlphaTensor: Using Reinforcement Learning for Efficient Matrix Multiplication
2022-10-27 Playing Go on TyGem and learning from AI (~ 3 kyu)
2022-10-13 Heroes of Might and Magic III - Armageddon's Blade Campaign (First Playthrough) - Final!!!
2022-10-13 Heroes of Might and Magic III - Armageddon's Blade Campaign (First Playthrough) - Part 6
2022-10-11 Playing Go on Tygem + AI Analysis (~4 kyu)
2022-10-11 Heroes of Might and Magic III - Armageddon's Blade Campaign (First Playthrough) - Part 5
2022-10-11 Heroes of Might and Magic III - Armageddon's Blade Campaign (First Playthrough) - Part 4
2022-10-10 Playing Go on Tygem + AI Analysis (~4 kyu)