Leveraging large-scale models for efficient learning

Video Link: https://www.youtube.com/watch?v=27PKr0T3gXM

Struggling to improve your model's performance without increasing its size, even after exhaustive data scraping and architectural tweaks? If you have access to a larger, well-performing model and the resources to train it, knowledge distillation could be the answer. This technique leverages a strong model to guide the training of a smaller one, leading to improved learning and faster convergence. This presentation offers a brief overview of knowledge distillation and its key benefits.
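
A minimal sketch of what that guidance can look like in code (not taken from the talk itself; it assumes PyTorch, a student and a teacher that emit logits over the same classes, and illustrative temperature/weighting values):

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: push the student's softened distribution toward the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Blend the two; alpha trades off imitating the teacher vs. fitting the labels.
    return alpha * soft + (1 - alpha) * hard

In this sketch the teacher is only run for inference; its weights stay frozen while the smaller student is updated.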

Subscribe to Google for Developers → https://goo.gle/developers

Speakers: Morgane Rivière
Products Mentioned: Gemma