Expanding the Capabilities of Language Models with External Tools

Published on 2023-05-21 ● Video Link: https://www.youtube.com/watch?v=WVV-lkYVLRY



Duration: 43:10


see more slides, notes, and other material here: https://github.com/Aggregate-Intellect/practical-llms/

https://www.linkedin.com/in/piesauce/

The field of NLP is evolving rapidly, with new models, tools, and techniques introduced regularly. In fact, 90% of the content in this presentation did not exist a few months ago, and the material on LangChain and LlamaIndex is likely to be woefully outdated within a month or two, given how quickly those libraries ship new releases. GPT-4 is evidence of the fast pace of development in the field. #NLP #GPT4 #MachineLearning

Instruction tuning is a powerful technique for improving LLMs and aligning them with human preferences. OpenAI’s InstructGPT paper introduced the use of reinforcement learning from human feedback (RLHF) to align LLMs with human preferences. #InstructionTuning #LLMs #ReinforcementLearning

Evaluations are ongoing to determine the full capabilities of LLMs. The speaker notes that LLMs are good at following instructions and query understanding, but their limitations are not fully understood yet, especially in their reasoning capabilities. #LLMs #Evaluations #MachineLearning

Being 99% accurate is not enough for many applications, especially if the failure cases are unpredictable. This is true for self-driving cars and may also be the case for a wide variety of NLP applications. #MachineLearning #NLP #Accuracy #selfdrivingcars
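To make the stakes concrete, here is a quick back-of-the-envelope calculation; the daily query volume is an illustrative assumption, not a figure from the talk:

```python
# Illustrative: even 99% accuracy leaves many failures at scale.
accuracy = 0.99
daily_queries = 1_000_000  # hypothetical traffic volume

expected_failures = int(daily_queries * (1 - accuracy))
print(expected_failures)  # ~10,000 unpredictable failures per day
```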

Language models break down complex questions into manageable components. For example, they can answer “who was the CTO of Apple when its share price was lowest in the last 10 years?” by finding the date of the lowest share price and the CTO on that date.
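The decomposition pattern behind that example can be sketched as two chained sub-queries. The lookup functions and their return values below are hypothetical stand-ins for real tools (a stock-price API and a knowledge-base lookup), not actual data:

```python
def lowest_share_price_date(ticker: str, years: int) -> str:
    """Stand-in for a stock-price API; returns a hypothetical date."""
    return "2016-05-12"

def cto_on_date(company: str, date: str) -> str:
    """Stand-in for a knowledge-base lookup; hypothetical answer."""
    return "Jane Doe"

# Step 1: resolve the sub-question the final answer depends on.
date = lowest_share_price_date("AAPL", years=10)
# Step 2: feed the intermediate result into the next sub-question.
answer = cto_on_date("Apple", date)
print(answer)
```

In an agent framework, the LLM itself would decide on this ordering and route each sub-question to the right tool.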

LLMs can filter out irrelevant results and automate tasks like flight searches. They call external APIs and databases and then synthesize coherent answers based on the output.
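A minimal sketch of that call-tool-then-synthesize loop, using the flight-search example; `search_flights`, its sample results, and the synthesis step are hypothetical stand-ins (a real agent would let the LLM choose the tool and phrase the answer):

```python
def search_flights(origin: str, dest: str) -> list[dict]:
    """Stand-in for an external flight-search API."""
    return [
        {"flight": "AC123", "price": 420, "stops": 0},
        {"flight": "UA456", "price": 310, "stops": 2},
        {"flight": "DL789", "price": 365, "stops": 1},
    ]

def answer_query(origin: str, dest: str, max_stops: int = 1) -> str:
    results = search_flights(origin, dest)
    # Filter out irrelevant results (too many stops) ...
    relevant = [r for r in results if r["stops"] <= max_stops]
    # ... then synthesize a coherent answer from what remains.
    best = min(relevant, key=lambda r: r["price"])
    return f"Cheapest option: {best['flight']} at ${best['price']}"

print(answer_query("YYZ", "SFO"))
```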

LLMs can answer questions about a company’s policies, product planning, or any other information stored in Google Docs or Notion by using the data connectors provided by LlamaIndex.
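A stripped-down sketch of the retrieve-then-answer pattern that such data connectors automate; the documents and the keyword-overlap scoring are toy stand-ins, not LlamaIndex’s actual API (a real index retrieves by embedding similarity):

```python
# Toy document store, as if loaded via a Google Docs/Notion connector.
docs = {
    "vacation-policy": "Employees accrue 15 vacation days per year.",
    "roadmap": "The Q3 product plan focuses on the mobile app.",
}

def retrieve(query: str) -> str:
    """Toy keyword-overlap retrieval; a real index uses embeddings."""
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return max(docs.values(), key=score)

# The retrieved passage is stuffed into the LLM prompt as context.
context = retrieve("how many vacation days do employees get?")
print(context)
```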

LLMs can be combined with a moderation chain to ensure answers contain no personally identifiable information (PII) or misinformation. Making an LLM genuinely good at a particular domain is the exciting part; merely passing exams in that domain is easy because of data contamination, so exam scores are weak evidence of real competence.
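A toy moderation step for the PII case can be sketched with a couple of redaction rules; the patterns below are simplified assumptions (a real moderation chain would also screen for misinformation, toxicity, and many more PII formats):

```python
import re

# Redact email addresses and phone numbers before an answer is returned.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
]

def moderate(answer: str) -> str:
    for pattern, placeholder in PII_PATTERNS:
        answer = pattern.sub(placeholder, answer)
    return answer

print(moderate("Contact jane@example.com or 555-123-4567."))
# prints: Contact [EMAIL] or [PHONE].
```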







Tags:
deep learning
machine learning