RAG Pipeline Using Standard Libraries and OPEA | AI with Guy | Intel Software

Video Link: https://www.youtube.com/watch?v=Ckl2XD1R3zU

Build your own RAG (Retrieval-Augmented Generation) pipeline using standard open-source tools—then swap in commercial-grade microservices at any stage using OPEA: the Open Platform for Enterprise AI.

In this demo, you’ll see how to create a flexible GenAI pipeline using LangChain and LangGraph, and then dynamically replace components (retriever, embedding, vector DB, or LLM) with production-ready services powered by OPEA’s plug-and-play architecture.

This is ideal for developers who want to:
– Build a RAG system fast using LangChain and LangGraph
– Test and compare local vs commercial GenAI components
– Future-proof pipelines by making each stage modular and replaceable
– Experiment with chaining open-source + enterprise AI tools together
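The "modular and replaceable" idea above can be sketched in plain Python. This is a stdlib-only illustration of the plug-and-play pattern, not the actual LangChain/LangGraph or OPEA API; every name and function below is hypothetical:

```python
# Hypothetical sketch: each RAG stage is a swappable callable, so any stage
# (embedding, retriever/vector DB, LLM) can be replaced independently --
# e.g. pointing a stage at an OPEA-hosted microservice instead of a local one.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RagPipeline:
    embed: Callable[[str], List[float]]           # embedding stage
    retrieve: Callable[[List[float]], List[str]]  # retriever + vector DB stage
    generate: Callable[[str, List[str]], str]     # LLM stage

    def run(self, question: str) -> str:
        vec = self.embed(question)
        docs = self.retrieve(vec)
        return self.generate(question, docs)

# Toy local components, just to make each stage concrete.
corpus = {
    "opea": "OPEA provides enterprise-grade GenAI microservices.",
    "rag": "RAG augments an LLM with retrieved context.",
}

def toy_embed(text: str) -> List[float]:
    # Crude bag-of-characters "embedding" -- a stand-in for a real model.
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def toy_retrieve(vec: List[float]) -> List[str]:
    # Ignores the query vector and returns the whole toy corpus.
    return list(corpus.values())

def toy_generate(question: str, docs: List[str]) -> str:
    # Stand-in for an LLM call that receives the retrieved context.
    return f"Q: {question} | context: {' '.join(docs)}"

pipeline = RagPipeline(embed=toy_embed, retrieve=toy_retrieve,
                       generate=toy_generate)
answer = pipeline.run("What is OPEA?")
```

Swapping a stage is then just a field replacement: construct the pipeline with a different `generate` callable (say, one that calls a production endpoint) and the other stages are untouched, which is the same interchangeability the video demonstrates with OPEA services.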

Code & Docs:
https://github.com/opea-project/LangChain-OPEA

About Intel Software:
Intel® Developer Zone is committed to empowering and assisting software developers in creating applications for Intel hardware and software products. The Intel Software YouTube channel is an excellent resource for those seeking to enhance their knowledge, providing the latest news, helpful tips, and engaging product demos from Intel and our many industry partners. Our videos cover a wide range of topics; follow the links to explore further.

Connect with Intel Software:
INTEL SOFTWARE WEBSITE: https://intel.ly/2KeP1hDD
INTEL SOFTWARE on FACEBOOK: http://bit.ly/2z8MPFFF
INTEL SOFTWARE on TWITTER: http://bit.ly/2zahGSnn
INTEL SOFTWARE GITHUB: http://bit.ly/2zaih6zz
INTEL DEVELOPER ZONE LINKEDIN: http://bit.ly/2z979qss
INTEL DEVELOPER ZONE INSTAGRAM: http://bit.ly/2z9Xsbyy
INTEL GAME DEV TWITCH: http://bit.ly/2BkNshuu

#intelsoftware





Other Videos By Intel Software


2025-06-24 OpenVINO accelerating Copilot+ AI-PC | AI with Guy
2025-06-24 Intel AI Playground with OpenVINO Backend | AI with Guy
2025-06-18 Get Started on Intel Gaudi AI Accelerators Using Your Existing GPU Code | Intel Software
2025-06-13 Discover Robotic AI Innovation at the Edge | Intel Software
2025-06-13 Run Ollama + Web-UI on Your AI PC | AI With Guy
2025-06-13 Get a GPU VM in One Click | AI With Guy
2025-06-11 The Heart of HPC Today: Heterogeneous Computing | Intel Software
2025-06-10 FAMU-FSU: Up-to-the-Minute Lessons in AI with the help of the Educator Program by Intel
2025-06-10 Cornell University: A Support System to Optimize Curriculum, Course Materials and Student Engagement
2025-06-10 Cal Poly: Breaking New Ground in Programming Curriculum, Without Reinventing the Wheel
2025-06-10 RAG Pipeline Using Standard Libraries and OPEA | AI with Guy |
2025-06-09 Run PyTorch 2.7 on Intel GPUs: A Step-by-Step Setup | AI with Guy
2025-06-06 GPU Coding Using Triton Compiler | AI with Guy
2025-06-05 vLLM Server Using OpenAI API on Gaudi 3 | AI with Guy
2025-06-04 Build a Gen-AI Application Across Multiple AWS Instances with OPEA | AI with Guy
2025-06-03 OPEA vs. NVIDIA NIM: What’s Best for Your GenAI Deployment?
2025-05-28 PyTorch Export Quantization with Intel GPUs | Intel Software
2025-05-23 Unlocking Gen AI: From Experimentation to Production with Red Hat & Intel | Intel Software
2025-05-23 Overcoming Deployment Challenges: Scaling AI in Edge Computing w/ Red Hat AI & Intel Edge Platforms
2025-05-23 Discover AI Innovations at Red Hat Summit with Intel: RHEL AI, OpenShift AI & Edge AI
2025-05-22 Explore OpenVINO Model Hub – Instantly Compare AI Model Performance Across Devices | AI with Guy