A Roadmap for AI: Past, Present and Future (Part 3) - Multi-Agent, Multiple Sampling and Filtering

Subscribers: 5,330
Video Link: https://www.youtube.com/watch?v=LuEyoLgxNkc



Game: StarCraft (1998)
Category: Discussion
Duration: 1:49:32
Views: 431
Likes: 16


In this third and final session, we move beyond the individual system, past basic formulations like GPTs, to multi-agent, multi-population systems.

We will also talk about the importance of reflection and memory sharing between agents.

We will also debate whether Artificial General Intelligence (AGI) / Artificial Superintelligence (ASI) and the Singularity can be achieved. If there were a catchphrase for AGI/ASI, I would propose "Multiple Sampling and Filtering", both within each agent and across agents.

Recap of Session 1 (Past/Present AI systems): Expert Knowledge Systems (rules learned from experts), Supervised Learning (human-labelled data), Unsupervised/Self-Supervised Learning (learning without human labels), and Foundation Models that learn from data and set a performance baseline.

Recap of Session 2 (Present/Future AI systems): Merging fixed structure with flexible learning, and imbuing agents with memory for learning. Memory can exist in hierarchical form, or in multiple parallel abstraction/latent spaces.

~~~~~~~~~~~~~~~~~

Slides: https://github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/A%20Roadmap%20for%20AI%20(Final).pdf

I believe multiple sampling and filtering is the key to self-improvement and AGI/ASI.

Here are some selected works:
Multiple sampling of memory to select best trajectory: Learning, Fast and Slow - Mine (https://arxiv.org/abs/2301.13758)
Multiple sampling of possible programs with different abstraction spaces: Multiple Expert Systems in ARC - Mine (https://arxiv.org/abs/2310.05146)
Monte Carlo Tree Search to sample multiple futures and learn from the best one: AlphaGo / AlphaZero - DeepMind (https://www.nature.com/articles/nature24270)
Multiple Code generation to solve competitive programming: AlphaCode - DeepMind (https://arxiv.org/abs/2203.07814)
Multi-agent population-based methods: AlphaStar - DeepMind (https://deepmind.google/discover/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii/)
Multiple sampling by agents to solve the multimodal image-text Minecraft environment: JARVIS-1 (https://arxiv.org/pdf/2311.05997.pdf)
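The selected works above all share one recipe: generate many candidate trajectories, programs, or moves, then filter them down to the best under some scoring function. Here is a minimal best-of-N sketch of that idea (the function names and the toy scoring task are illustrative, not taken from any of the cited papers):

```python
import random

def sample_candidates(generate, n):
    """Multiple sampling: draw n candidates from a (possibly stochastic) generator."""
    return [generate() for _ in range(n)]

def filter_best(candidates, score):
    """Filtering: keep the candidate with the highest score."""
    return max(candidates, key=score)

# Toy example: approximate a hidden target number.
# The generator proposes random guesses; the filter keeps the closest one.
TARGET = 42

def generate_guess():
    return random.randint(0, 100)

def closeness(guess):
    return -abs(guess - TARGET)  # higher is better

best = filter_best(sample_candidates(generate_guess, 50), closeness)
print(best)
```

The same skeleton covers the cited systems: in AlphaCode the generator is an LLM producing programs and the filter runs test cases; in MCTS-based systems like AlphaZero the generator rolls out futures and the filter is the value estimate; across agents, each agent's output is itself a candidate to be filtered.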

~~~~~~~~~~~~~~~~~

Other References:
Generative Agents: https://www.youtube.com/watch?v=_pkktFIcZRo
OpenAI GPTs: https://openai.com/blog/introducing-gpts
Learning, Fast and Slow: https://www.youtube.com/watch?v=DSVFA7nmwHQ
Hierarchical Temporal Memory (HTM) by Numenta: https://www.numenta.com/resources/research-publications/papers/hierarchical-temporal-memory-white-paper/
LLMs as a System of Multiple Expert Agents to Solve the ARC Challenge: https://www.youtube.com/watch?v=sTvonsD5His
ChatDev (LLM agents to simulate software company): https://www.youtube.com/watch?v=5sXqpCIIuT8
Voyager (LLM agent to self-learn the Minecraft environment): https://www.youtube.com/watch?v=Y-pgbjTlYgk

~~~~~~~~~~~~~~~~~

0:00 Introduction
7:15 Learning through Reflection
18:00 Agent Overview
24:45 Basic Agent - GPTs
27:25 Multiple Agents within same system
33:08 Multiple Specialised Agents within same system
38:10 Learning Skills via Interaction with Environment
49:20 Collective Intelligence
56:29 Knowledge Sharing between Agents
1:13:40 Intelligence via Multiple Populations
1:16:03 Can AGI/ASI be achieved?
1:35:09 Can the singularity be reached?
1:39:58 Discussion

~~~~~~~~~~~~~~~~~

AI and ML enthusiast. Likes to think about the essences behind AI breakthroughs and explain them in a simple and relatable way. Also an avid game creator.

Discord: https://discord.gg/bzp87AHJy5
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin




Other Videos By John Tan Chong Min


2024-01-29: V* - Better than GPT-4V? Iterative Context Refining for Visual Question Answer!
2024-01-23: AutoGen: A Multi-Agent Framework - Overview and Improvements
2024-01-09: AppAgent: Using GPT-4V to Navigate a Smartphone!
2024-01-08: Tutorial #13: StrictJSON, my first Python Package! - Get LLMs to output into a working JSON!
2023-12-20: "Are you smarter than an LLM?" game speedrun
2023-12-08: Is Gemini better than GPT4? Self-created benchmark - Fact Retrieval/Checking, Coding, Tool Use
2023-12-04: Learning, Fast and Slow: 10 Years Plan - Memory Soup, Hier. Planning, Emotions, Knowledge Sharing
2023-12-01: Tutorial #12: Use ChatGPT and off-the-shelf RAG on Terminal/Command Prompt/Shell - SymbolicAI
2023-11-20: JARVIS-1: Multi-modal (Text + Image) Memory + Decision Making with LLMs in MineCraft!
2023-11-20: Tutorial #11: Virtual Persona from Documents, Multi-Agent Chat, Text-to-Speech to hear your Personas
2023-11-14: A Roadmap for AI: Past, Present and Future (Part 3) - Multi-Agent, Multiple Sampling and Filtering
2023-11-07: Learning, Fast and Slow: My Landmark Idea for fast, adaptable agents (ICDL 2023 Best Paper Finalist)
2023-11-06: A roadmap for AI: Past, Present and Future (Part 2): Fixed vs Flexible, Memory Soup vs Hierarchy
2023-11-03: AI & Education: Education when AI tools are smarter than us - Discussion with Kuang Wen (Part 2)
2023-11-03: AI & Education: RAG Question-Answer, Test Question Generator, Autograder by Kuang Wen! (Part 1)
2023-10-31: A Roadmap for AI: Past, Present and Future (Part 1)
2023-10-28: Tutorial #10: StrictJSON v2 (StrictText): Handle any output - quotation marks or backslash!
2023-10-24: ChatDev: Can LLM Agents really replace a software company?
2023-10-17: LLMs and Robotics: An Overview by Daniel Tan!
2023-10-17: LLM Q&A #1: Prompting vs Fine-Tuning, More vs Fewer Sources for RAG, Prompting vs LLMs as a System
2023-10-10: LLMs as a System of Multiple Expert Agents to solve the ARC Challenge (Detailed Walkthrough)



Other Statistics

StarCraft Statistics For John Tan Chong Min

There are 431 views in 1 video for StarCraft. About an hour's worth of StarCraft videos has been uploaded to his channel, making up less than 0.58% of the total video content that John Tan Chong Min has uploaded to YouTube.