A Roadmap for AI: Past, Present and Future (Part 1)

Video Link: https://www.youtube.com/watch?v=VP4DDdUsGws
Published on: 2023-10-31
Duration: 1:31:09


Join me to imagine the future of AI.

In this sharing session, I cover what has been done so far in Expert Knowledge Systems (learning rules from experts), Supervised Learning (learning from human-labelled data), and Unsupervised Learning (learning from unlabelled data).

I also cover how we might extend unsupervised learning beyond text to images, audio and other domains. I end with the observation that while large-scale models trained on large data (termed foundational models) may be useful for bootstrapping, we will ultimately need some form of learning from the environment for self-improvement to take place. This will form the backdrop of the next session.
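As a toy illustration of the next-token-prediction idea behind unsupervised learning on text (my own minimal sketch, not from the talk): no human labels are needed, because the "label" for each token is simply the token that follows it in the raw corpus.

```python
from collections import Counter, defaultdict

# Raw, unlabelled text -- the training signal comes from the text itself.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each token follows each other token (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation of `token` seen in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real LLMs replace the bigram counts with a Transformer that predicts a probability distribution over the next token, but the self-supervised training objective is the same in spirit.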

In the next session, I will show how memory can power the next wave of improvements to AI systems (including grounding Large Language Models with Knowledge Graphs).
The last part covers agents: multiple systems in one agent, multiple agents in an ecosystem, and finally, multiple ecosystems.

The future is unknown, but what is certain is that technological improvement will lead us to something far more advanced than the current state of the art.
My research seeks to build fast and adaptable agents, which I believe will be key to the future of AI systems.

~~~~~~~~~~~~~~~~~~~

Slides: https://github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/A%20Roadmap%20for%20AI.pdf

Related Resources:
A. Expert Systems:
Cyc (Knowledge Graph): https://cyc.com/platform/
Logical Systems (First Order Logic): https://www.cl.cam.ac.uk/teaching/2021/LogicProof/logic-notes.pdf
Universal Approximation Theorem: https://en.wikipedia.org/wiki/Universal_approximation_theorem
Gödel Incompleteness Theorem: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

B. Supervised Learning
Some (Mathematical) Notes by Andrew Ng: https://cs229.stanford.edu/lectures-spring2022/main_notes.pdf

C. Unsupervised Learning / LLMs Next-token Prediction
How ChatGPT works: https://www.youtube.com/watch?v=wA8rjKueB3Q
GenAI Study Group Introduction: https://www.youtube.com/watch?v=h-73Eu64FgQ

D. Transformer for other modalities
VALL-E (Audio): https://www.youtube.com/watch?v=G9k-2mYl6Vo
OpenAI JukeBox (Audio): https://openai.com/research/jukebox
I-JEPA (Image): https://www.youtube.com/watch?v=M98OLk30dBk
SayCan (Robotics): https://www.youtube.com/watch?v=iS3ikfSsp6Y
RT-2 (Robotics): https://deepmind.google/discover/blog/rt-2-new-model-translates-vision-and-language-into-action/

E. Large Foundational Models
Foundational Models Workshop (Neurips 2022): https://neurips.cc/virtual/2022/workshop/49988

~~~~~~~~~~~~~~~~~~~

0:00 Introduction
2:22 Expert Systems
16:18 Supervised Learning - Learning from Human-labelled Data
35:05 Unsupervised Learning - Learning from Unlabelled Data
42:48 Latent Space Prediction is Powerful
1:02:55 Unlocking Unsupervised Learning for other modalities
1:20:06 Large Foundational Models and how we should go from here

~~~~~~~~~~~~~~~~~~~

AI and ML enthusiast. I like to think about the essence behind AI breakthroughs and explain them in a simple and relatable way. I am also an avid game creator.

Discord: https://discord.gg/bzp87AHJy5
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin

Other Videos By John Tan Chong Min

2023-12-08 Is Gemini better than GPT4? Self-created benchmark - Fact Retrieval/Checking, Coding, Tool Use
2023-12-04 Learning, Fast and Slow: 10 Years Plan - Memory Soup, Hier. Planning, Emotions, Knowledge Sharing
2023-12-01 Tutorial #12: Use ChatGPT and off-the-shelf RAG on Terminal/Command Prompt/Shell - SymbolicAI
2023-11-20 JARVIS-1: Multi-modal (Text + Image) Memory + Decision Making with LLMs in MineCraft!
2023-11-20 Tutorial #11: Virtual Persona from Documents, Multi-Agent Chat, Text-to-Speech to hear your Personas
2023-11-14 A Roadmap for AI: Past, Present and Future (Part 3) - Multi-Agent, Multiple Sampling and Filtering
2023-11-07 Learning, Fast and Slow: My Landmark Idea for fast, adaptable agents (ICDL 2023 Best Paper Finalist)
2023-11-06 A roadmap for AI: Past, Present and Future (Part 2): Fixed vs Flexible, Memory Soup vs Hierarchy
2023-11-03 AI & Education: Education when AI tools are smarter than us - Discussion with Kuang Wen (Part 2)
2023-11-03 AI & Education: RAG Question-Answer, Test Question Generator, Autograder by Kuang Wen! (Part 1)
2023-10-31 A Roadmap for AI: Past, Present and Future (Part 1)
2023-10-28 Tutorial #10: StrictJSON v2 (StrictText): Handle any output - quotation marks or backslash!
2023-10-24 ChatDev: Can LLM Agents really replace a software company?
2023-10-17 LLMs and Robotics: An Overview by Daniel Tan!
2023-10-17 LLM Q&A #1: Prompting vs Fine-Tuning, More vs Fewer Sources for RAG, Prompting vs LLMs as a System
2023-10-10 LLMs as a System of Multiple Expert Agents to solve the ARC Challenge (Detailed Walkthrough)
2023-09-26 Everything about LLM Agents - Chain of Thought, Reflection, Tool Use, Memory, Multi-Agent Framework
2023-09-19 Moving Beyond Probabilities: Memory as World Modelling
2023-09-05 Symbolic Regression: Doing What LLMs cannot - Deriving Arbitrary Mathematical Relations!
2023-08-29 LLM Agents as a System (Prelim Findings Sharing): An Attempt to solve a 2-player 2D Escape Room!
2023-08-23 LLM as Pattern Machines (Part 2) - Goal Directed Decision Transformers, 10-Year Plan for Intelligence