How do we learn so fast? Towards a biologically plausible model for one-shot learning.

Subscribers:
5,330
Published on: 2023-06-29
Video Link: https://www.youtube.com/watch?v=X3Bu3LmJby0



Game: Changes (2021)
Category: Vlog
Duration: 1:35:55
Views: 366
Likes: 16


M Ganesh Kumar shares his work on modelling one-shot learning in a biologically plausible way, approached primarily from a neuroscience angle. He uses Hebbian learning to change synaptic weights, reinforcement learning with an actor-critic architecture, and planning with working memory!
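As a rough illustration of the reward-modulated Hebbian idea covered in the talk (see the 27:11 chapter), here is a minimal sketch of a three-factor update in which the usual pre-times-post Hebbian term is gated by a scalar reward signal. All names (w, eta, step) and sizes are illustrative assumptions, not taken from the speaker's code:

import numpy as np

# Minimal sketch of a reward-modulated (three-factor) Hebbian update.
# The Hebbian pre*post outer product is scaled by a scalar reward term,
# so weights only change when the reward signal is non-zero.
rng = np.random.default_rng(0)
n_in, n_out = 8, 2
w = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weights
eta = 0.1                                      # learning rate

def step(x, reward):
    y = np.tanh(w @ x)                         # post-synaptic activity
    dw = eta * reward * np.outer(y, x)         # three-factor update
    return y, dw

x = rng.normal(size=n_in)                      # pre-synaptic activity
y, dw = step(x, reward=0.5)
w += dw

Because the reward factor multiplies the whole update, a single strongly rewarded trial can already write a usable association, which is the intuition behind using Hebbian rules for one-shot learning.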

Abstract:
One-shot learning is the ability to learn to solve a problem after a single trial, a feat achievable by both algorithms and animals. However, how the brain might perform one-shot learning remains poorly understood, as most algorithms are not biologically plausible. After gradually learning multiple cue-location paired associations (PAs), rodents learned new PAs after a single trial (Tse et al., 2007), demonstrating one-shot learning. We show that reinforcement learning agents can learn multiple PAs but fail to demonstrate one-shot learning of new PAs. We introduce three biologically plausible knowledge structures, or schemas, to the agent: 1) the ability to learn a metric representation of the environment, 2) the ability to form associations between each cue and its goal location after one trial, and 3) the ability to compute the direction to arbitrary goals from the current location. After gradual learning, agents learned multiple new PAs after a single trial, replicating the rodent results.
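To make schemas 2 and 3 concrete, here is a hedged sketch of a one-trial cue-to-goal memory built from a Hebbian outer product, together with the direction computation from the current position to the recalled goal. The function names and dimensions are illustrative assumptions, not the paper's implementation:

import numpy as np

rng = np.random.default_rng(1)
cue_dim = 16

def one_shot_store(M, cue, goal_xy, lr=1.0):
    # Schema 2: bind a cue vector to its 2-D goal location in one update.
    return M + lr * np.outer(goal_xy, cue)

def recall_and_head(M, cue, pos_xy):
    # Schema 3: recall the stored goal for a cue, then return the unit
    # heading from the current position towards it.
    goal = M @ cue
    vec = goal - pos_xy
    return vec / (np.linalg.norm(vec) + 1e-8)

M = np.zeros((2, cue_dim))                     # empty association memory
cue = rng.normal(size=cue_dim)
cue /= np.linalg.norm(cue)                     # normalised cue vector
M = one_shot_store(M, cue, goal_xy=np.array([0.6, -0.3]))
heading = recall_and_head(M, cue, pos_xy=np.array([0.0, 0.0]))

With normalised cues, a single outer-product update is enough for M @ cue to return the stored coordinates, which is why the association can be formed after one trial.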

Speaker Information: https://mgkumar138.github.io/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
References:

Slides: https://github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/One-shot%20Learning%20(Ganesh).pdf

A nonlinear hidden layer enables actor–critic agents to learn multiple paired association navigation - https://academic.oup.com/cercor/article-abstract/32/18/3917/6509014

One-shot learning of paired association navigation with biologically plausible schemas - https://arxiv.org/abs/2106.03580

My work on Learning, Fast and Slow: https://www.youtube.com/watch?app=desktop&v=Hr9zW7Usb7I

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Other Reading Materials suggested by Ganesh:

Random synaptic feedback weights support error backpropagation for deep learning - https://www.nature.com/articles/ncomms13276

Local online learning in recurrent networks with random feedback - https://elifesciences.org/articles/43299

Prefrontal cortex as a meta-reinforcement learning system - https://www.nature.com/articles/s41593-018-0147-8

Task representations in neural networks trained to perform many cognitive tasks - https://www.nature.com/articles/s41593-018-0310-2

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

0:00 Introduction
2:50 Non-linear representations necessary for one-shot learning
22:43 One-shot learning of new Paired Associations
27:11 Reward modulated Exploratory Hebbian rule
34:27 Hopfield networks do not do one-shot association
37:03 Deleting memories is important for learning
39:12 Goal-directed systems
41:21 Composing three agents into one!
48:52 Adding gates can help minimize distractions
53:00 Question on Forgetting Mechanism
56:24 What if useful signals come after distractors?
1:00:29 Can we learn reward-free?
1:07:16 DetermiNet: Large-Scale Dataset for Complex Visually-Grounded Referencing using Determiners
1:13:51 Summary
1:20:36 Discussion
1:24:20 Towards Biologically-Plausible Neurosymbolic Architectures

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

AI and ML enthusiast. Likes to think about the essence behind AI breakthroughs and explain them in a simple and relatable way. Also an avid game creator.

Discord: https://discord.gg/bzp87AHJy5
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin




Other Videos By John Tan Chong Min


2023-08-15 - LLMs as General Pattern Machines: Use Arbitrary Tokens to Pattern Match?
2023-08-08 - Tutorial #6: LangChain & StrictJSON Implementation of Knowledge Graph Question Answer with LLMs
2023-08-08 - Large Language Models and Knowledge Graphs: Merging Flexibility and Structure
2023-07-31 - Tutorial #5: SymbolicAI - Automatic Retrieval Augmented Generation, Multimodal Inputs, User Packages
2023-07-27 - How Llama 2 works: Ghost Attention, Quality Supervised Fine-tuning, RLHF for Safety and Helpfulness
2023-07-27 - Llama 2 vs ChatGPT
2023-07-11 - I-JEPA: Importance of Predicting in Latent Space
2023-07-09 - Gen AI Study Group Introductory Tutorial - Transformers, ChatGPT, Prompt Engineering, Projects
2023-07-03 - Tutorial #5: Strict JSON LLM Framework - Get LLM to output JSON exactly the way you want it!
2023-07-01 - Tutorial #4: SymbolicAI ChatBot In-Depth Demonstration (Tool Use and Iterative Processing)
2023-06-29 - How do we learn so fast? Towards a biologically plausible model for one-shot learning.
2023-06-20 - LLMs as a system to solve the Abstraction and Reasoning Corpus (ARC) Challenge!
2023-06-16 - Tutorial #3: Symbolic AI - Symbols, Operations, Expressions, LLM-based functions!
2023-06-13 - No more RL needed! LLMs for high-level planning: Voyager + Ghost In the Minecraft
2023-06-06 - Voyager - An LLM-based curriculum generator, actor and critic, with skill reuse in Minecraft!
2023-06-01 - Evolution ChatGPT Prompt Game - From Bacteria to.... Jellyfish???
2023-05-30 - Prompt Engineering and LLMOps: Tips and Tricks
2023-05-25 - Hierarchy! The future of AI: How it helps representations and why it is important.
2023-05-18 - Prediction builds representations! Fixed Bias speeds up learning!
2023-05-09 - Memory: How is it encoded, retrieved and how it can be used for learning systems
2023-05-02 - I created a Law Court Simulator with GPT4!



Other Statistics

Changes Statistics For John Tan Chong Min

There are 366 views in 1 video for Changes. About an hour's worth of Changes videos has been uploaded to his channel, making up less than 0.51% of the total video content that John Tan Chong Min has uploaded to YouTube.