Creating JARVIS: ChatGPT + APIs - HuggingGPT, Memory-Augmented Context, Meta GPT structures
This is Part 2 of the LLMs with Tools/Plugins/APIs discussion session. In this one crazy week of AI, we already have TaskMatrix.AI, which can link millions of APIs to one GPT model, and HuggingGPT, which interfaces GPT with multiple Hugging Face models. HuggingGPT is particularly interesting, as it uses memory-augmented retrieval of past input/output examples to form a broad plan. TaskMatrix.AI has a mechanism that lets its APIs improve from user feedback.
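To make the retrieval idea concrete, here is a minimal Python sketch (not from the actual JARVIS codebase; the names and toy embedding are my own assumptions) of pulling the most similar past task/plan demonstrations into a planning prompt:

from dataclasses import dataclass
import math

@dataclass
class Demo:
    task: str
    plan: str

def embed(text: str) -> list[float]:
    # Stand-in for a real sentence encoder: hash words into a small vector.
    vec = [0.0] * 64
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_demos(query: str, memory: list[Demo], k: int = 2) -> list[Demo]:
    # Rank stored demonstrations by similarity to the new task.
    q = embed(query)
    return sorted(memory, key=lambda d: cosine(q, embed(d.task)), reverse=True)[:k]

def build_planning_prompt(query: str, memory: list[Demo]) -> str:
    # Prepend the retrieved demonstrations as few-shot examples for planning.
    shots = "\n".join(f"Task: {d.task}\nPlan: {d.plan}" for d in retrieve_demos(query, memory))
    return f"{shots}\nTask: {query}\nPlan:"

memory = [
    Demo("caption this image", "1. detect objects 2. generate caption"),
    Demo("translate to French", "1. call translation model"),
]
print(build_planning_prompt("caption my holiday photo", memory))

A production system would swap the toy embed() for a real sentence encoder and a vector database (e.g. Pinecone, as in the task-driven agent linked below).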
We then see how GPTs can be linked into a greater ecosystem in a hierarchical fashion (e.g. a Prompt Manager orchestrating GPT) and in a recurrent fashion (e.g. Socratic Models); a rough sketch of the hierarchical case follows.
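Here is a hedged Python sketch of that hierarchical pattern (the model names and call_llm are placeholders, not a real API): a manager LLM decomposes a request and routes each subtask to a specialist model.

def call_llm(model: str, prompt: str) -> str:
    # Placeholder for a real API call (e.g. OpenAI or Hugging Face Inference).
    return f"[{model} output for: {prompt!r}]"

def manager(request: str) -> str:
    # Step 1: the manager model plans subtasks as plain text, one per line.
    plan = call_llm("manager-gpt", f"Decompose into subtasks, one per line: {request}")
    results = []
    # Step 2: each subtask is routed to a specialist model.
    for subtask in plan.splitlines():
        specialist = "vision-model" if "image" in subtask else "text-model"
        results.append(call_llm(specialist, subtask))
    # Step 3: the manager composes the specialist outputs into a final answer.
    return call_llm("manager-gpt", f"Combine these results: {results}")

print(manager("Describe the image and summarise the caption."))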
I believe that as the number of APIs grows, we will soon hit the token limit of in-context prompting, and increasing the context length will be of paramount importance. We discuss Memformer, Memorizing Transformers, and a task input/output memory storage-and-retrieval system, and whether they can solve the problem.
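For intuition on how Memorizing Transformers extend context, here is a toy standalone Python version of the k-NN external memory idea (the real model runs this per attention head inside the network; this sketch only shows the store-and-retrieve step):

import numpy as np

class KNNMemory:
    def __init__(self, dim: int):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def add(self, keys: np.ndarray, values: np.ndarray) -> None:
        # Append key/value pairs from tokens that fell out of the context window.
        self.keys = np.vstack([self.keys, keys])
        self.values = np.vstack([self.values, values])

    def lookup(self, query: np.ndarray, k: int = 4) -> np.ndarray:
        # Retrieve the k stored values whose keys best match the query.
        scores = self.keys @ query
        top = np.argsort(scores)[-k:]
        return self.values[top]

rng = np.random.default_rng(0)
mem = KNNMemory(dim=8)
mem.add(rng.normal(size=(100, 8)), rng.normal(size=(100, 8)))
retrieved = mem.lookup(rng.normal(size=8), k=4)  # shape (4, 8)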
LLMs with tools will be the next step forward. Whether they can achieve AGI remains to be seen. However, I do believe that multiple GPT models in a multi-agent format may achieve more than the sum of their parts and become a very powerful system. Watch this space.
~~~~~~~~~~~~~~~~
Slides: https://github.com/tanchongmin/TensorFlow-Implementations/blob/main/Paper_Reviews/LLMs%20as%20API%20Interface.pdf
Related videos:
Part 1 of LLM and Tools (Toolformer, Visual ChatGPT, Wolfram Alpha Plugin): https://www.youtube.com/watch?v=J1Xj0xXmtHU
How ChatGPT works: https://www.youtube.com/watch?v=wA8rjKueB3Q
Learning, Fast and Slow (Adaptive learning): https://www.youtube.com/watch?v=Hr9zW7Usb7I
Reference Materials:
TaskMatrix.AI: https://github.com/microsoft/visual-chatgpt/tree/main/TaskMatrix.AI
HuggingGPT (aka JARVIS): https://github.com/microsoft/JARVIS
Lottery Ticket Hypothesis: https://arxiv.org/abs/1803.03635
Neural Darwinism: https://en.wikipedia.org/wiki/Neural_Darwinism
Reflexion: https://arxiv.org/abs/2303.11366
Socratic Models: https://arxiv.org/abs/2204.00598
Memformer: https://aclanthology.org/2022.findings-aacl.29/
Memorizing Transformers: https://arxiv.org/abs/2203.08913
Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications: https://yoheinakajima.com/task-driven-autonomous-agent-utilizing-gpt-4-pinecone-and-langchain-for-diverse-applications/
~~~~~~~~~~~~~~~~~~
0:00 Introduction
0:22 TaskMatrix.AI
10:02 HuggingGPT (aka JARVIS)
24:57 Prompts for HuggingGPT
31:18 HuggingGPT Input to Output Flowchart
41:56 Emergence via LLMs connected hierarchically or recurrently
1:03:12 Memory
1:11:51 Discussion
1:34:16 Conclusion
~~~~~~~~~~~~~~~~~~~
AI and ML enthusiast. Likes to think about the essence behind AI breakthroughs and explain it in a simple and relatable way. I am also an avid game creator.
Discord: https://discord.gg/fXCZCPYs
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin