Deploy Compiled PyTorch Models on Intel GPUs with AOTInductor | Intel Software
AOTInductor compiles PyTorch models ahead of time, producing a deployable artifact for inference. It uses torch.export to capture a static computational graph of the model's forward pass, then uses TorchInductor to compile that graph and generate a shared library. While this capability has been available for Intel CPUs, support for Intel GPUs was introduced in PyTorch 2.7. This simple example shows how to use AOTInductor on Intel GPUs, including the Python code that invokes AOTInductor, a C++ inference example, and a CMakeLists.txt file for configuring the build across platforms.
Resources:
Get started with PyTorch on Intel GPUs: https://pytorch.org/docs/stable/notes/get_start_xpu.html
AOTInductor documentation and example: https://pytorch.org/docs/2.7/torch.compiler_aot_inductor.html
Intel AI software resources: https://developer.intel.com/ai
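The cross-platform build configuration mentioned in the description could be sketched along these lines. The target and source file names (aoti_inference, inference.cpp) are hypothetical, and the libtorch location must be supplied via CMAKE_PREFIX_PATH:

```cmake
cmake_minimum_required(VERSION 3.18)
project(aoti_inference)

# Locate the libtorch CMake config;
# configure with -DCMAKE_PREFIX_PATH=/path/to/libtorch
find_package(Torch REQUIRED)

# inference.cpp is a hypothetical source that loads the
# AOTInductor-generated package for inference
add_executable(aoti_inference inference.cpp)
target_link_libraries(aoti_inference "${TORCH_LIBRARIES}")
set_property(TARGET aoti_inference PROPERTY CXX_STANDARD 17)
```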
About Intel Software:
Intel® Developer Zone is committed to empowering and assisting software developers who create applications for Intel hardware and software products. The Intel Software YouTube channel is an excellent resource for those seeking to enhance their knowledge, providing the latest news, helpful tips, and engaging product demos from Intel and our many industry partners. Our videos cover a wide range of topics; explore them further by following the links below.
Connect with Intel Software:
INTEL SOFTWARE WEBSITE: https://intel.ly/2KeP1hD
INTEL SOFTWARE on FACEBOOK: http://bit.ly/2z8MPFF
INTEL SOFTWARE on TWITTER: http://bit.ly/2zahGSn
INTEL SOFTWARE GITHUB: http://bit.ly/2zaih6z
INTEL DEVELOPER ZONE LINKEDIN: http://bit.ly/2z979qs
INTEL DEVELOPER ZONE INSTAGRAM: http://bit.ly/2z9Xsby
INTEL GAME DEV TWITCH: http://bit.ly/2BkNshu
#intelsoftware