A Roadmap for AI: Past, Present and Future (Part 1)
Join me to imagine the future of AI.
In this sharing session, I cover what has been done so far: Expert Knowledge Systems (rules elicited from experts), Supervised Learning (learning from human-labelled data), and Unsupervised Learning (learning from unlabelled data).
I also cover how we might extend unsupervised learning from text to images, audio and other domains. I end with the note that while large-scale models trained on large datasets (termed foundational models) may be useful for bootstrapping, we will ultimately need some form of learning from the environment for self-improvement to take place. This forms the backdrop of the next session.
In the next session, I will show how memory can power the next wave of improvements to AI systems (this includes grounding Large Language Models with Knowledge Graphs).
The last part covers agents: multiple systems in one agent, multiple agents in an ecosystem, and finally, multiple ecosystems.
The future is unknown, but what is certain is that technological progress will lead us to something far more advanced than the current state of the art.
My research work seeks to help attain fast and adaptable agents, which I believe will be key to the future of AI systems.
~~~~~~~~~~~~~~~~~~~
Slides: https://github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/A%20Roadmap%20for%20AI.pdf
Related Resources:
A. Expert Systems:
Cyc (Knowledge Graph): https://cyc.com/platform/
Logical Systems (First Order Logic): https://www.cl.cam.ac.uk/teaching/2021/LogicProof/logic-notes.pdf
Universal Approximation Theorem: https://en.wikipedia.org/wiki/Universal_approximation_theorem
Gödel's Incompleteness Theorems: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
B. Supervised Learning:
Some (Mathematical) Notes by Andrew Ng: https://cs229.stanford.edu/lectures-spring2022/main_notes.pdf
C. Unsupervised Learning / LLM Next-token Prediction:
How ChatGPT works: https://www.youtube.com/watch?v=wA8rjKueB3Q
GenAI Study Group Introduction: https://www.youtube.com/watch?v=h-73Eu64FgQ
D. Transformers for other modalities:
VALL-E (Audio): https://www.youtube.com/watch?v=G9k-2mYl6Vo
OpenAI JukeBox (Audio): https://openai.com/research/jukebox
I-JEPA (Image): https://www.youtube.com/watch?v=M98OLk30dBk
SayCan (Robotics): https://www.youtube.com/watch?v=iS3ikfSsp6Y
RT-2 (Robotics): https://deepmind.google/discover/blog/rt-2-new-model-translates-vision-and-language-into-action/
E. Large Foundational Models:
Foundational Models Workshop (NeurIPS 2022): https://neurips.cc/virtual/2022/workshop/49988
~~~~~~~~~~~~~~~~~~~
0:00 Introduction
2:22 Expert Systems
16:18 Supervised Learning - Learning from Human-labelled Data
35:05 Unsupervised Learning - Learning from Unlabelled Data
42:48 Latent Space Prediction is Powerful
1:02:55 Unlocking Unsupervised Learning for other modalities
1:20:06 Large Foundational Models and how we should go from here
~~~~~~~~~~~~~~~~~~~
AI and ML enthusiast. I like to think about the essences behind AI breakthroughs and explain them in a simple and relatable way. I am also an avid game creator.
Discord: https://discord.gg/bzp87AHJy5
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin