ChatGPT - the Chatbot that Follows Instructions - DRT S2E9

Published on: 2022-12-20 ● Video Link: https://www.youtube.com/watch?v=fEbmS3JW2UM



Category: Tutorial
Duration: 48:13
567 views
18


In this episode we talked about:
- how things have changed since 2017 with transformers: BERT-based models as encoders, and GPT-based models as decoders
- how scaling laws for LMs reached a point where emergent multi-task behavior appeared beyond certain model capacities
- how prompt engineering emerged as a field for controlling the behavior of LLMs, and the problems associated with it
- how instruction finetuning has led to a promising solution to the problems associated with prompt engineering (see the sketch after this list)
- how the OpenAI team made interesting product decisions in releasing the model as a chatbot, and the impact of that on creating hype
- opportunities founders and other builders have to create vertical GPT-based products
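To make the contrast between prompt engineering and instruction following concrete, here is a minimal, hypothetical Python sketch of the two prompting styles. The function names and prompts are illustrative assumptions made for this description, not material from the episode, and no model is actually called.

# Hypothetical sketch: the same sentiment task expressed in the two styles
# touched on in the episode. Nothing here calls a real API; the prompts and
# function names are made up for illustration.

def few_shot_prompt(review: str) -> str:
    # Prompt engineering for a base LM: show input/output examples and let
    # the model continue the pattern.
    return (
        "Review: The food was cold and the service was slow.\nSentiment: negative\n"
        "Review: Loved the atmosphere and the friendly staff.\nSentiment: positive\n"
        f"Review: {review}\nSentiment:"
    )

def instruction_prompt(review: str) -> str:
    # Instruction style for an instruction-tuned model (e.g. InstructGPT/ChatGPT):
    # simply state the task in natural language.
    return f"Classify the sentiment of the following review as positive or negative:\n{review}"

if __name__ == "__main__":
    review = "The pasta was great but the wait was far too long."
    print(few_shot_prompt(review))
    print("---")
    print(instruction_prompt(review))

The point made in the episode is that instruction-tuned models make the second, much simpler style dependable, whereas base models typically need the brittle example-crafting of the first.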

Summary: I think it's incredible that they've been able to get the attention of people outside of the machine learning field, and I commend the product team for their insight. However, I want to add a word of caution about the hype around the model: I've found that it still makes basic language model errors. For example, when I asked it about the smallest congressional district in Canada, it gave me an incorrect answer. It's important to remember that while this model represents significant technical progress, it still has many shortcomings. When working with these models, it's crucial to think about the problem we're trying to solve and the right system design for achieving that goal.




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2023-03-22 Commercializing LLMs: Lessons and Ideas for Agile Innovation
2023-03-22 The Emergence of KnowledgeOps
2023-02-28 Neural Search for Augmented Decision Making - Zeta Alpha - DRT S2E17
2023-02-21 Distributed Data Engineering for Science - OpSci - Holonym - DRT S2E16
2023-02-14 Data Products - Accumulation of Imperfect Actions Towards a Focused Goal - DRT S2E15
2023-02-07 Unfolding the Maze of Funding Deep Tech; Metafold - DRT S2E14 - Ft. Moien Giashi, Alissa Ross
2023-01-31 Data Structure for Knowledge = Language Models + Structured Data - DRT S2E13
2023-01-25 EVE - Explainable Vector Embeddings - DRT S2E12
2023-01-17 LabDAO - Decentralized Marketplace for Research in Life Sciences - DRT S2E11
2023-01-10 Data-Driven Behavior Change and Personalization - DRT S2E10
2022-12-20 ChatGPT - the Chatbot that Follows Instructions - DRT S2E9
2022-12-16 Investing in Deep Tech - Investor's Angle; Deep Random Talks S2E8 - Ft. Moien Giashi, Amir Feizpour
2022-12-09 Modern Knowledge Management in 2022 - Deep Random Talks S2E7
2022-12-02 TalentDAO - How does decentralized scientific publishing work - Deep Random Talks S2E6
2022-11-25 Evaluating Performance of Large Language Models with Linguistics - Deep Random Talks S2E5
2022-11-18 Second Brain for Technical Knowledge Management - DRT S2E4
2022-11-14 What are the future plans of Foodshake?
2022-11-14 November 14, 2022
2022-11-14 Learn about Foodshake and its vegan recipes!
2022-11-10 Community-Driven Product Development - Deep Random Talk S2E3
2022-11-03 Funding Deep Tech Projects: Founder's POV - Deep Random Talk S2E2 - Ft. Moien Giashi, Amir Feizpour



Tags:
deep learning
machine learning
ChatGPT
GPT3
InstructGPT
SFT
GPT3.5
Vertical GPT
Knowledge Management
Knowledgebases