Supercharging AI with high performance distributed computing

Video Link: https://www.youtube.com/watch?v=JvssZESVcjI



Duration: 5:14


5-min ML Paper Challenge
Presenter: https://www.linkedin.com/in/nachoruiz/

Image Classification at Supercomputer Scale
https://arxiv.org/abs/1811.06992

Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3% accuracy in 2.2 minutes on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images/second and no accuracy drop.
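The third optimization in the abstract, a 2-D torus all-reduce, sums gradients in two phases: first within each row of the chip grid, then within each column, so every replica ends up with the global sum while traffic stays on short torus links. The following is a minimal sketch of that two-phase reduction pattern; the function name and the list-of-lists grid representation are illustrative assumptions, not the paper's actual TPU implementation.

```python
def allreduce_2d(grads):
    """Two-phase (row, then column) all-reduce over a 2-D grid of replicas.

    grads: list of lists, one scalar gradient per replica.
    Returns the grid with every replica holding the global sum.
    """
    rows, cols = len(grads), len(grads[0])
    # Phase 1: all-reduce along each row (the X dimension of the torus).
    for r in range(rows):
        row_sum = sum(grads[r])
        grads[r] = [row_sum] * cols
    # Phase 2: all-reduce along each column (the Y dimension). Each column
    # entry is already a row sum, so summing a column yields the global sum.
    for c in range(cols):
        col_sum = sum(grads[r][c] for r in range(rows))
        for r in range(rows):
            grads[r][c] = col_sum
    return grads

# Example: a 2x2 grid where each replica holds gradient 1.0;
# after the 2-D reduce, every replica holds the global sum 4.0.
result = allreduce_2d([[1.0, 1.0], [1.0, 1.0]])
```

Splitting the reduction across two torus dimensions keeps each phase's communication local to one ring, which is what makes this faster than a single flat all-reduce over 1024 chips.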







Tags:
deep learning
machine learning
Supercomputer Scale
Tensor Processing Units
Mixed precision
Distributed Batch Normalization
Input Pipeline Optimization
2-D Torus All-Reduce