Efficient implementation of a neural network on hardware using compression techniques

Video Link: https://www.youtube.com/watch?v=Pc8Ma95YKHc



Duration: 5:14


5-min ML Paper Challenge

EIE: Efficient Inference Engine on Compressed Deep Neural Network
https://arxiv.org/pdf/1602.01528.pdf

Abstract—State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. The previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight.
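
For intuition, here is a minimal sketch (in NumPy) of the two compression steps the abstract names: magnitude pruning followed by weight sharing through a small codebook. The function names and the evenly spaced codebook are illustrative assumptions; Deep Compression itself fits the codebook with k-means clustering and retrains the network.

    import numpy as np

    def prune(W, sparsity=0.9):
        # Magnitude pruning: zero out the smallest-magnitude weights so
        # only a (1 - sparsity) fraction of the connections survives.
        threshold = np.quantile(np.abs(W), sparsity)
        return np.where(np.abs(W) >= threshold, W, 0.0)

    def share_weights(W_pruned, n_clusters=16):
        # Weight sharing: map every surviving weight to the index of its
        # nearest codebook entry. With 16 entries, each weight is stored
        # as a 4-bit index plus one shared full-width value per entry.
        # (An evenly spaced codebook stands in for k-means here.)
        nz = W_pruned[W_pruned != 0]
        codebook = np.linspace(nz.min(), nz.max(), n_clusters)
        idx = np.abs(nz[:, None] - codebook[None, :]).argmin(axis=1)
        return codebook, idx

It is this combination, most weights removed and the rest reduced to 4-bit indices, that shrinks AlexNet and VGGNet far enough to fit in on-chip SRAM.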
We propose an energy-efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE a 120× energy saving; exploiting sparsity saves 10×; weight sharing gives 8×; skipping the zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster than CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes the FC layers of AlexNet at 1.88×10^4 frames/sec with a power dissipation of only 600 mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU, respectively. Compared with DaDianNao, EIE has 2.9×, 19×, and 3× better throughput, energy efficiency, and area efficiency.
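
The kernel EIE accelerates is the sparse matrix-vector product over this compressed representation. Below is a minimal sketch assuming SciPy's CSC layout; the function name eie_spmv and the weight_idx array (codebook indices aligned with the stored nonzeros) are illustrative, not from the paper.

    import numpy as np
    from scipy.sparse import csc_matrix

    def eie_spmv(W_csc, codebook, weight_idx, a):
        # y = W @ a, with W stored column-wise (CSC) and every stored
        # weight encoded as a small codebook index instead of a float.
        y = np.zeros(W_csc.shape[0])
        for j in np.flatnonzero(a):          # skip zero activations from ReLU
            start, end = W_csc.indptr[j], W_csc.indptr[j + 1]
            rows = W_csc.indices[start:end]  # rows with a nonzero weight in column j
            # decode the shared weights through the codebook on the fly
            y[rows] += codebook[weight_idx[start:end]] * a[j]
        return y

    # Usage sketch: prune a random matrix, quantize its nonzeros, multiply.
    W = np.random.randn(64, 64)
    W[np.abs(W) < 1.0] = 0.0                 # crude magnitude pruning
    W_csc = csc_matrix(W)
    codebook = np.linspace(W_csc.data.min(), W_csc.data.max(), 16)
    weight_idx = np.abs(W_csc.data[:, None] - codebook[None, :]).argmin(axis=1)
    a = np.maximum(np.random.randn(64), 0.0) # ReLU output: roughly half zeros
    y = eie_spmv(W_csc, codebook, weight_idx, a)

Skipping entire columns for zero activations is where the 3× ReLU saving above comes from, and decoding 4-bit indices instead of fetching full-width weights from DRAM is the weight-sharing saving.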




Tags:
deep learning
machine learning
Model Compression
Hardware Acceleration
Algorithm-Hardware co-design