Building Custom LLMs for Production Inference Endpoints - Wallaroo.ai
Channel:
Subscribers: 117,000
Video Link: https://www.youtube.com/watch?v=4gpxteDBJEY
In this session we will dive into the details of how to build, deploy, and optimize custom Large Language Models (LLMs) for production inference environments.
This session will cover the key steps for custom LLMs (Llama), focusing on:
Why custom LLMs?
Inference Performance Optimization
Harmful Language Detection (see the sketch after this list)
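The description only lists the topics, so as a rough illustration of the last two bullets, the sketch below loads a small open Llama-family checkpoint in half precision (one generic inference optimization) and screens its output with a public toxicity classifier before returning it. The model IDs (TinyLlama/TinyLlama-1.1B-Chat-v1.0, unitary/toxic-bert), the threshold, and the helper function are assumptions for illustration, not the session's actual stack.

```python
# A minimal sketch, not taken from the session: it assumes the open
# TinyLlama/TinyLlama-1.1B-Chat-v1.0 checkpoint as a stand-in for a custom
# Llama model and unitary/toxic-bert as the harmful-language classifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

LLM_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # assumed stand-in model
DETECTOR_ID = "unitary/toxic-bert"              # assumed toxicity classifier

# Inference performance optimization: half precision and automatic device
# placement are common, generic optimizations (quantization, batching, and
# compiled kernels are further options a production endpoint might use).
tokenizer = AutoTokenizer.from_pretrained(LLM_ID)
model = AutoModelForCausalLM.from_pretrained(
    LLM_ID, torch_dtype=torch.float16, device_map="auto"
)

# Harmful language detection: screen generated text before returning it.
detector = pipeline("text-classification", model=DETECTOR_ID, top_k=None)

def generate_safe(prompt: str, threshold: float = 0.5) -> str:
    """Generate a reply and withhold it if any toxicity label exceeds the threshold."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=128)
    reply = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    scores = detector(reply)[0]
    if any(item["score"] >= threshold for item in scores):
        return "[response withheld: flagged as potentially harmful]"
    return reply

if __name__ == "__main__":
    print(generate_safe("Explain why teams build custom LLM inference endpoints."))
```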
#microsoftreactor #Wallaroo.ai

[eventID:23965]