Dirichlet Pruning for Neural Network Compression | AISC

Published on 2020-12-18 ● Video Link: https://www.youtube.com/watch?v=myAlT938-pY



Duration: 51:57 | 310 views


For slides and more information on the paper, visit https://ai.science/e/dirichlet-pruning-for-neural-network-compression--eiXZPy3A7LvZd9vYXRPt

Speaker: Kamil Adamczewski; Host: Nour Fahmy

Motivation:
We introduce Dirichlet pruning, a novel post-processing technique that transforms a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning which places a Dirichlet distribution over each layer's channels in convolutional layers (or neurons in fully-connected layers), and estimates the parameters of the distribution over these units using variational inference. The learned distribution allows us to remove unimportant units, resulting in a compact architecture containing only the features crucial for the task at hand. Our method is extremely fast to train: the number of newly introduced Dirichlet parameters is only linear in the number of channels, which allows for rapid training, requiring as little as one epoch to converge. We perform extensive experiments, in particular on larger architectures such as VGG and WideResNet (45% and 52% compression rate, respectively), where our method achieves state-of-the-art compression performance and provides interpretable features as a by-product.
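The idea in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: in the paper the per-channel Dirichlet concentration parameters are learned by variational inference alongside the network, whereas here hypothetical learned values are simply taken as given, and channels are ranked by their expected probability under the Dirichlet distribution.

```python
import numpy as np

def dirichlet_channel_importance(alpha):
    """Expected channel importance under Dirichlet(alpha):
    E[pi_i] = alpha_i / sum(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha / alpha.sum()

def prune_mask(alpha, keep_ratio=0.5):
    """Boolean mask keeping the highest-importance channels of a layer."""
    imp = dirichlet_channel_importance(alpha)
    k = max(1, int(round(keep_ratio * len(imp))))
    keep = np.argsort(imp)[::-1][:k]  # indices of the k most important channels
    mask = np.zeros(len(imp), dtype=bool)
    mask[keep] = True
    return mask

# Hypothetical learned concentrations for a 6-channel convolutional layer.
alpha = [5.0, 0.3, 2.2, 0.1, 4.1, 0.4]
mask = prune_mask(alpha, keep_ratio=0.5)  # keeps channels 0, 4, and 2
```

Because only one concentration parameter per channel is added, the overhead is linear in the number of channels, which is why training the importance distribution can converge in as little as one epoch.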




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2021-01-15 Introduction to NVIDIA NeMo - A Toolkit for Conversational AI | AISC
2021-01-15 Explainable Classifiers Using Counterfactual Approach | AISC
2021-01-14 Machine learning meets continuous flow chemistry: Automated process optimization | AISC
2021-01-13 Screening and analysis of specific language impairment | AISC
2021-01-08 High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks | AISC
2021-01-07 Locality Guided Neural Networks for Explainable AI | AISC
2021-01-06 Explaining image classifiers by removing input features using generative models | AISC
2020-12-24 An Introduction to the Quantum Tech Ecosystem | AISC
2020-12-23 Explaining by Removing: A Unified Framework for Model Explanation | AISC
2020-12-18 The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
2020-12-18 Dirichlet Pruning for Neural Network Compression | AISC
2020-12-17 Breaking Speed Limits with Simultaneous Ultra-Fast MRI Reconstruction and Tissue Segmentation | AISC
2020-12-16 How to Track Objects in Videos with Self-supervised Techniques | AISC
2020-12-15 Practical Transformers - Natural Language Processing | Learning Package Overview
2020-12-11 AI for a Sustainable Future: Think Globally, Act Locally! | AISC
2020-12-11 Steve Brunton: Machine Learning for Fluid Dynamics
2020-12-10 An algorithm for Bayesian optimization for categorical variables informed by physical intuition with
2020-12-09 Artificial Intelligence, Ethics and Bias | AISC
2020-12-08 Agora: Working Remotely with Ease
2020-12-08 GNN-TOX: Graph Neural Nets to Make Drug Discovery Cheaper
2020-12-08 Logeo: Automatically Transform 2D Designs to 3D