Large Language Models as Building Blocks

Video Link: https://www.youtube.com/watch?v=iVCp0Jq-2UA



Duration: 35:34


Check out my essays: https://aisc.substack.com/
OR book me to talk: https://calendly.com/amirfzpr
OR subscribe to our event calendar: https://lu.ma/aisc-llm-school
OR sign up for our LLM course: https://maven.com/aggregate-intellect/llm-systems

🟒 Search systems can be improved by using language models to understand the meaning of queries and documents, rather than just matching keywords.
🟒 Semantic search can be broken down into two main components: dense retrieval and ranking. Dense retrieval uses embeddings to find similar documents, while ranking uses a language model to score the relevance of each document to the query.
🟒 Retrieval augmented generation (RAG) is a technique that combines search and generation. In RAG, a search system is used to find documents that are relevant to a query, and then a language model is used to generate text that summarizes or answers the query based on those documents.
🟒 RAG can be improved by using query rewriting to reformulate the user's query into a more searchable form.
🟒 Large language models (LLMs) are still maturing, and it is important to be aware of their limitations: they can generate fluent text that is nonetheless factually incorrect or misleading (hallucination), so outputs should be grounded in retrieved evidence and verified where it matters.
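The two-stage semantic search pipeline described above (dense retrieval followed by ranking) can be sketched in miniature. This is a toy illustration, not a production system: the "embedding" here is just a bag-of-words count vector and the "reranker" reuses cosine similarity, whereas a real system would use a trained sentence encoder for stage one and a language-model cross-encoder for stage two.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real dense
    # retriever would use a trained sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dense_retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Stage 1: cheap similarity search over the whole corpus.
    q = embed(query)
    scored = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    # Stage 2: score each (query, doc) pair more carefully. Here we
    # reuse cosine; a real reranker would be a language model that
    # reads the query and document together.
    q = embed(query)
    return sorted(candidates, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "how to bake sourdough bread at home",
    "semantic search with dense embeddings",
    "ranking documents with cross-encoders",
    "history of the printing press",
]
query = "embedding based document search"
top = rerank(query, dense_retrieve(query, docs))
print(top[0])
```

The split matters because stage one must be fast enough to scan the whole corpus, while stage two can afford a heavier model since it only sees the handful of candidates that survive retrieval.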
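The RAG flow with query rewriting can likewise be sketched end to end. Everything here is a stand-in: `rewrite_query` strips filler words where a real system would ask an LLM to reformulate, retrieval is simple keyword overlap, and the final prompt would be sent to a language model rather than printed.

```python
def rewrite_query(raw: str) -> str:
    # Query rewriting: turn a conversational question into search terms.
    # A real system would have an LLM reformulate; we just drop filler.
    filler = {"what", "is", "the", "a", "an", "please", "tell",
              "me", "about", "how", "do", "i", "and"}
    return " ".join(w for w in raw.lower().split() if w not in filler)

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    # Keyword-overlap retrieval stands in for a dense retriever.
    q = set(query.split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(corpus[d].lower().split())),
                    reverse=True)
    return [corpus[d] for d in ranked[:k]]

def build_prompt(question: str, passages: list[str]) -> str:
    # The retrieved passages are stuffed into the generation prompt so
    # the model answers from evidence rather than from memory alone.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = {
    "d1": "RAG combines a retriever with a text generator",
    "d2": "Sourdough starters need regular feeding",
}
question = "Please tell me about how RAG combines retrieval and generation"
prompt = build_prompt(question, retrieve(rewrite_query(question), corpus))
print(prompt)
```

In a full system, `prompt` would now go to the LLM; grounding the answer in retrieved passages is what mitigates the hallucination problem noted in the last bullet.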







Tags:
deep learning
machine learning