Scaling AI Model Training and Inferencing Efficiently with PyTorch

Subscribers: 1,190,000
Video link: https://www.youtube.com/watch?v=85RfazjDPwA
Duration: 18:29
Views: 3,512
Likes: 123


Learn more about PyTorch → https://ibm.biz/BdSx57
Learn more about Llama → https://ibm.biz/BdSx53
Llama Recipes on GitHub → https://ibm.biz/BdSx5Q
Foundation Model Stack on GitHub → https://ibm.biz/foundation-model-stack

PyTorch is an open-source, Python-based framework for AI model training and inferencing, but training and serving large models are still resource-intensive tasks. In this video, Raghu Kiran Ganti from IBM Research and Suraj Subramanian from Meta discuss how to reduce both the cost and the time of AI workloads using PyTorch features like Fully Sharded Data Parallel (FSDP) and TorchDynamo.
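
For context, here is a minimal sketch of the two features named above: FSDP sharding a model across GPUs and torch.compile (whose graph-capture frontend is TorchDynamo) compiling it out of eager mode. This is not the code discussed in the video; the model, hyperparameters, and training loop are placeholders, and it assumes a single-node multi-GPU job launched with torchrun (e.g. torchrun --nproc_per_node=4 train.py).

    import torch
    import torch.nn as nn
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def main():
        # torchrun sets the env vars init_process_group needs; one process per GPU.
        dist.init_process_group(backend="nccl")
        local_rank = dist.get_rank() % torch.cuda.device_count()
        torch.cuda.set_device(local_rank)

        # Placeholder model standing in for a real foundation model.
        model = nn.Sequential(
            nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)
        ).cuda()

        # FSDP shards parameters, gradients, and optimizer state across ranks,
        # so each GPU holds only a fraction of the full model state.
        model = FSDP(model, use_orig_params=True)

        # torch.compile uses TorchDynamo to capture the eager-mode program into
        # graphs that a compiler backend can optimize.
        model = torch.compile(model)

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        for _ in range(10):  # dummy training loop with random data
            batch = torch.randn(8, 1024, device="cuda")
            loss = model(batch).pow(2).mean()
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()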

Get started for free on IBM Cloud → https://ibm.biz/sign-up-now
Subscribe to see more videos like this in the future → http://ibm.biz/subscribe-now

#PyTorchConf #pytorch #ai

Tags:
IBM Cloud
pytorch
deeplearning
deep learning
machine learning
ML
AI
Artificial Intelligence
fully sharded data parallel
FSDP
GPU
CPU
distributeddataparallel
DDP
infiniband
ethernet
facebook
meta
meta ai
Facebook ai
Eager Mode
Eagermode
PyTorchConf