Retrieval Augmented Generation (RAG) with LangChain: A Complete Tutorial
This comprehensive tutorial walks you through building Retrieval Augmented Generation (RAG) systems with LangChain. We cover everything from environment setup (API keys and environment variables) and working with chat models (including Ollama) to the core components of RAG: loading and splitting documents, creating embeddings, storing them in vector databases, and using retrievers. The video culminates in full, practical RAG examples: a basic RAG pipeline, a web-based RAG application, and an extended web RAG implementation with file uploading, demonstrating how to connect your LLM to external knowledge sources like the web for more accurate, better-informed responses. Learn how to ground your large language models in real-world data and reduce hallucinations with hands-on examples. #machinelearning #ai #langchain
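The pipeline described above can be sketched in plain Python to show the core idea before diving into LangChain. This is a toy illustration, not the video's code: the chunk sizes, the bag-of-words "embedding", and the sample documents are all stand-ins (a real system would use LangChain's text splitters, an embedding model, and a vector store).

```python
import math
from collections import Counter

def split_text(text, chunk_size=100, overlap=20):
    """Naive character splitter, mimicking a recursive text splitter."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text):
    """Toy 'embedding': a bag-of-words Counter (a real RAG app uses a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, store, k=1):
    """Return the k stored chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda c: cosine(q, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Load -> split -> embed -> store (an in-memory list plays the vector DB here)
docs = ["LangChain connects LLMs to external data sources.",
        "Vector stores hold embeddings for similarity search.",
        "Ollama runs open models locally."]
store = [(chunk, embed(chunk)) for d in docs for chunk in split_text(d)]

# Retrieve the most relevant chunk; in full RAG it would be fed to the LLM as context
print(retrieve("Where are embeddings stored?", store))
```

In the real pipeline covered in the video, the retrieved chunks are stuffed into the LLM's prompt as context, which is what lets the model answer from your data instead of hallucinating.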
Code: https://github.com/KodySimpson/rag-langchain
Join the Community! - https://rebrand.ly/discordlink
Want to Support the Channel?
Become a Member:
https://buymeacoffee.com/kodysimpson
My Socials:
Github: https://github.com/KodySimpson
Instagram: https://www.instagram.com/kody_a_simpson/
Twitter: https://twitter.com/kodysimp
Blog: https://simpson.hashnode.dev/
Timestamps:
0:00:00 - Introduction
0:06:42 - Environment Setup
0:12:05 - Getting an OpenAI Key
0:14:00 - Environment Variables
0:18:01 - Chat Models
0:32:16 - Using Ollama
0:36:42 - Document Loaders
0:47:24 - Splitting
1:01:16 - Embeddings & Vector Stores
1:22:18 - Retrievers
1:28:42 - Full RAG Example
1:40:39 - Web RAG App
1:58:20 - Adding File Uploading
2:09:16 - Outro
More videos coming soon.
Leave a comment with any future video suggestions.