Pushing boundaries of complex reasoning in small language models
Mojan Javaheripi, Member of Technical Staff at Microsoft Research AI Frontiers, presents Phi-4-Reasoning and Phi-4-Reasoning-Plus, two 14B-parameter models designed to advance complex reasoning in small language models. By introducing a dedicated “thinking block” and applying supervised fine-tuning on carefully curated STEM data, with an additional reinforcement learning stage for the Plus variant, these models deliver substantial gains in problem-solving capability.
Phi-4-Reasoning: https://huggingface.co/microsoft/Phi-4-reasoning
Phi-4-Reasoning-Plus: https://huggingface.co/microsoft/Phi-4-reasoning-plus
Phi-4 Reasoning paper (PDF): https://aka.ms/phi4reasoningPDF
Azure AI Foundry Model Catalog: https://aka.ms/AIFoundryModelCatalog
Microsoft/phi-4-gguf on Hugging Face: https://huggingface.co/microsoft/phi-4-gguf
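Not covered in the session itself, but as a starting point: the sketch below shows one way to load Phi-4-Reasoning from the Hugging Face model ID linked above and separate the thinking block from the final answer. The `</think>` delimiter and generation settings are assumptions about the chat template, so check the model card before relying on them.

```python
# Minimal sketch (assumptions noted above): run Phi-4-Reasoning with Hugging Face
# transformers and split the reasoning ("thinking") block from the final answer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning"  # from the link above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "If 3x + 7 = 22, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Assumption: the model emits its chain of thought inside a thinking block,
# closed by </think>, followed by the final answer.
reasoning, _, answer = text.partition("</think>")
print(answer.strip() or reasoning.strip())
```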
This session aired on September 24, 2025, at Microsoft Research Forum, Season 2 Episode 1.
Register for the series to learn about future episodes: https://aka.ms/registerresearchforumYTs2e1
Continue watching this episode: https://aka.ms/researchforumYTs2e1
Explore all previous episodes: https://aka.ms/researchforumYTplaylist