Dynamic Routing Between Capsules

Subscribers: 286,000
Published on: 2019-09-04
Video link: https://www.youtube.com/watch?v=nXGHJTtFYRU
Duration: 42:07
Views: 8,724
Likes: 303


Geoff Hinton's next big idea! Capsule Networks are an alternative way of building neural networks: each layer is divided into capsules, and each capsule is responsible for detecting the presence and properties of one particular entity in the input sample. Its output is then routed dynamically to higher-level capsules by a novel, unconventional routing-by-agreement scheme. While Capsule Networks are still in their infancy, they are an exciting and promising new direction.
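To make the "length = presence, orientation = properties" idea concrete, here is a minimal NumPy sketch of the squashing nonlinearity the paper applies to each capsule's raw output; the function name `squash` and the toy vectors are illustrative choices, not taken from the authors' code:

```python
import numpy as np

def squash(s, eps=1e-8):
    # Paper's capsule nonlinearity: v = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    # Shrinks the vector's length into [0, 1) so it can act as an existence
    # probability, while preserving orientation (the entity's properties).
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# A long raw output squashes to length ~1 (entity present),
# a short one to length ~0 (entity absent).
print(np.linalg.norm(squash(np.array([3.0, 4.0]))))  # ~0.96
print(np.linalg.norm(squash(np.array([0.1, 0.1]))))  # ~0.02
```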

Abstract:
A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher-level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher-level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
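The iterative routing-by-agreement mechanism from the abstract fits in a few lines. Below is a minimal NumPy sketch of the routing loop (Procedure 1 in the paper); the shapes, the toy data, and the name `u_hat` (standing for the predictions W_ij u_i made by lower capsules for upper ones) are my own illustrative choices, not the authors' implementation:

```python
import numpy as np

def squash(s, eps=1e-8):
    # Capsule nonlinearity (same as in the sketch above).
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, num_iterations=3):
    # u_hat[i, j]: prediction of lower capsule i for upper capsule j,
    # shape (num_lower, num_upper, dim_upper).
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))                      # routing logits
    for _ in range(num_iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum of votes
        v = squash(s)                                         # upper-capsule outputs
        b += np.einsum('ijd,jd->ij', u_hat, v)                # reward agreeing predictions
    return v

# Toy usage: 6 lower capsules voting for 3 upper capsules of dimension 8.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 3, 8))   # in a real network this would be W_ij @ u_i
v = route(u_hat)
print(np.linalg.norm(v, axis=-1))    # vector lengths ~ existence probabilities
```

Upper capsules whose incoming predictions agree accumulate larger routing logits and end up with longer output vectors, which is exactly the routing-by-agreement behavior the abstract describes.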

Authors: Sara Sabour, Nicholas Frosst, Geoffrey E Hinton

https://arxiv.org/abs/1710.09829


YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/




Other Videos By Yannic Kilcher


2019-11-21 MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
2019-11-07 A neurally plausible model learns successor representations in partially observable environments
2019-11-03 SinGAN: Learning a Generative Model from a Single Natural Image
2019-11-02 AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning
2019-11-01 IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
2019-10-31 The Visual Task Adaptation Benchmark
2019-10-15 LeDeepChef 👨‍🍳 Deep Reinforcement Learning Agent for Families of Text-Based Games
2019-10-14 [News] The Siraj Raval Controversy
2019-10-07 Accelerating Deep Learning by Focusing on the Biggest Losers
2019-09-05 DEEP LEARNING MEME REVIEW - Episode 1
2019-09-04 Dynamic Routing Between Capsules
2019-09-03 RoBERTa: A Robustly Optimized BERT Pretraining Approach
2019-08-28 Auditing Radicalization Pathways on YouTube
2019-08-13 Gauge Equivariant Convolutional Networks and the Icosahedral CNN
2019-08-12 Processing Megapixel Images with Deep Attention-Sampling Models
2019-08-09 Manifold Mixup: Better Representations by Interpolating Hidden States
2019-08-08 Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
2019-08-05 Reconciling modern machine learning and the bias-variance trade-off
2019-07-05 Conversation about Population-Based Methods (Re-upload)
2019-07-03 XLNet: Generalized Autoregressive Pretraining for Language Understanding
2019-06-13 Talking to companies at ICML19



Tags:
machine learning
deep learning
capsules
capsule networks
google brain
hinton
jeff hinton
geoff hinton
routing
neural networks
convolution
convolutional neural networks
deep neural networks
cnns
mnist
multimnist
disentanglement
architecture
reconstruction
alternative
dnn
ml
ai
artificial intelligence
brain
visual system
classifier
image
nonlinearity
entities
objects
capsule
network