Run Inference with a Model from Hugging Face Hub on an Intel® Gaudi™ AI Accelerator | Intel Software

Subscribers: 256,000
Published on 2025-04-16 ● Video Link: https://www.youtube.com/watch?v=ibpsVjwxyCo

41,827 views ● 118 likes

Intel® Gaudi™ AI Accelerators are built from the ground up to accelerate AI training and inference. There are a few ways to get started with Intel Gaudi software; one of them is Optimum Habana, an open source package developed by Intel and Hugging Face. This tutorial is available to download so you can try it yourself while following along with the video. It uses a multi-modal model that processes images and text, so the application lets you ask questions about an image. Learn how to get started with a free AI Accelerator instance on Intel® Tiber™ AI Cloud and what is required to run this model from the Hugging Face Hub.
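For a sense of what the tutorial walks through, here is a minimal sketch of running a multi-modal (image + text) model from the Hugging Face Hub on a Gaudi device with Optimum Habana. The specific checkpoint (llava-hf/llava-1.5-7b-hf), the image URL, and the prompt are illustrative assumptions, not necessarily the ones used in the video; the linked tutorial repo has the authoritative version.

import torch
import habana_frameworks.torch.core as htcore  # importing this registers the "hpu" device with PyTorch
from transformers import pipeline
from optimum.habana.transformers.modeling_utils import adapt_transformers_to_gaudi

# Patch Hugging Face Transformers with Gaudi-optimized model implementations.
adapt_transformers_to_gaudi()

# Assumed model: any image+text checkpoint from the Hub; LLaVA 1.5 is used here purely as an example.
vqa = pipeline(
    "image-to-text",
    model="llava-hf/llava-1.5-7b-hf",
    torch_dtype=torch.bfloat16,
    device="hpu",
)

# Hypothetical image URL and question about the image.
image_url = "https://example.com/street_scene.jpg"
prompt = "USER: <image>\nWhat is happening in this picture? ASSISTANT:"

result = vqa(image_url, prompt=prompt, generate_kwargs={"max_new_tokens": 128})
print(result[0]["generated_text"])

Going through the standard Transformers pipeline keeps the code almost identical to a CPU or GPU workflow; the only Gaudi-specific pieces in this sketch are the habana_frameworks import, the adapt_transformers_to_gaudi() call, and the "hpu" device string.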

Resources:
Intel Gaudi tutorials repo: https://github.com/HabanaAI/Gaudi-tutorials/t...
Intel Tiber AI Cloud: https://cloud.intel.com/
Intel AI Software: https://developer.intel.com/aim/ai

About Intel Software:
Intel® Developer Zone is committed to empowering and assisting software developers in creating applications for Intel hardware and software products. The Intel Software YouTube channel is an excellent resource for those seeking to enhance their knowledge. Our channel provides the latest news, helpful tips, and engaging product demos from Intel and our numerous industry partners. Our videos cover various topics; you can explore them further by following the links.

Connect with Intel Software:
INTEL SOFTWARE WEBSITE: https://intel.ly/2KeP1hD
INTEL SOFTWARE on FACEBOOK: http://bit.ly/2z8MPFF
INTEL SOFTWARE on TWITTER: http://bit.ly/2zahGSn
INTEL SOFTWARE GITHUB: http://bit.ly/2zaih6z
INTEL DEVELOPER ZONE LINKEDIN: http://bit.ly/2z979qs
INTEL DEVELOPER ZONE INSTAGRAM: http://bit.ly/2z9Xsby
INTEL GAME DEV TWITCH: http://bit.ly/2BkNshu
INTEL DEVHUB DISCORD: https://discord.gg/9dSTMfa9NK

#intelsoftware




Other Videos By Intel Software


2025-05-28  PyTorch Export Quantization with Intel GPUs | Intel Software
2025-05-23  Unlocking Gen AI: From Experimentation to Production with Red Hat & Intel | Intel Software
2025-05-23  Overcoming Deployment Challenges: Scaling AI in Edge Computing w/ Red Hat AI & Intel Edge Platforms
2025-05-23  Discover AI Innovations at Red Hat Summit with Intel: RHEL AI, OpenShift AI & Edge AI
2025-05-22  Explore OpenVINO Model Hub – Instantly Compare AI Model Performance Across Devices | AI with Guy
2025-05-22  Build a RAG Chatbot with OPEA on AWS | AI with Guy | Intel Software
2025-05-20  Enterprise AI Inference with Intel: Bill Pearson on Infrastructure & Standards | Intel Software
2025-05-20  Automatically Quantize LLMs with AutoRound | Intel Software
2025-05-13  Deploy Compiled PyTorch Models on Intel GPUs with AOTInductor | Intel Software
2025-04-21  Faster GenAI, Visual AI, Edge to Cloud, and HPC Solutions | oneAPI & AI Tools 2025.1
2025-04-16  Run Inference with a Model from Hugging Face Hub on an Intel® Gaudi™ AI Accelerator | Intel Software
2025-03-28  OpenVINO Notebook on Intel Tiber AI Cloud in 2 Minutes | AI with Guy | Intel Software
2025-03-28  AI Agents using OpenVINO and LangChain ReAct | AI with Guy | Intel Software
2025-03-21  AI PC: Achieving Success at Scale with Windows Copilot + Experiences | Intel AI DevSummit
2025-03-19  OPEA (Open Platform for Enterprise AI) Chat Q&A Example | AI with Guy | Intel Software
2025-03-19  OPEA (Open Platform for Enterprise AI) micro-services | AI with Guy | Intel Software
2025-03-18  OPEA (Open Platform for Enterprise AI) Introduction | AI with Guy | Intel Software
2025-03-17  AI Methods for Understanding Implicit Structures in Medical Records | Lightning Talk
2025-03-17  Build RAG Apps in YAML | Intel AI DevSummit 2025 | Intel Software
2025-03-17  Building Agentic LLM Workflows with AutoGen | Intel AI DevSummit 2025 | Intel Software
2025-03-17  Building private LLM-powered second brain on Intel CPU | Intel AI DevSummit 2025 | Intel Software