A short introduction to SLMs, reasoning models, and local models, with use cases
Video Link: https://www.youtube.com/watch?v=6rInrvdK818
In this episode, we’ll explore Small Language Models (SLMs) and how they compare to larger models. SLMs are efficient, require less compute and memory, and can run on edge devices while still excelling at a variety of tasks. We’ll dive into the Phi-3.5 and Phi-4 model series and demonstrate how to build a practical application using these models. We’ll also look at reasoning models such as OpenAI o1, o3, and DeepSeek R1, and how they work differently from regular LLMs.
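As a taste of what running an SLM locally can look like, here is a minimal sketch using the Hugging Face Transformers text-generation pipeline with a Phi-3.5 model. The model name, prompt, and generation settings are illustrative assumptions, not the exact setup shown in the episode.

```python
# Minimal sketch: running a small language model locally with Hugging Face Transformers.
# Assumes the `transformers` and `torch` packages are installed; model choice is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3.5-mini-instruct",  # a small instruct-tuned model suited to local use
    device_map="auto",                        # uses a GPU if available, otherwise falls back to CPU
    torch_dtype="auto",
)

prompt = "Explain in one sentence why small language models are useful on edge devices."
result = generator(prompt, max_new_tokens=60, do_sample=False)

print(result[0]["generated_text"])
```

Because the model is small, the same pattern works on a laptop or an edge device without a dedicated GPU, which is the main practical advantage the episode highlights.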
Learn more: https://aka.ms/Learn-Collection