GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)

Subscribers: 284,000
Published on: 2021-02-27
Video Link: https://www.youtube.com/watch?v=cllFzkvrYmE



Category: Tutorial
Duration: 1:03:26
Views: 43,171
Likes: 1,333


#glom #hinton #capsules

Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. Unlike previous systems, however, the parse tree is constructed dynamically and differently for each input, without changing the underlying neural network. This is done by a multi-step consensus algorithm that runs over different levels of abstraction at each location of the image simultaneously. GLOM is just an idea for now, but it suggests a radically new approach to AI visual scene understanding.
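For intuition, here is a minimal sketch (not a working implementation, and not code from the paper) of one GLOM settling step in Python/NumPy. Every image location keeps a column of vectors, one per part-whole level; each vector is averaged with a bottom-up prediction from the level below, a top-down prediction from the level above, and an attention-weighted consensus with the same level at other locations. The sizes, the tanh linear maps standing in for the learned bottom-up/top-down networks, and the plain mean are all assumptions; in the paper, the top-down network is a neural field that also receives the image location.

```python
import numpy as np

# Toy dimensions (assumptions, not from the paper):
D = 16        # embedding size per level
LEVELS = 5    # part-whole levels per column
LOCS = 64     # image locations (columns), e.g. an 8x8 patch grid

rng = np.random.default_rng(0)
E = rng.normal(size=(LEVELS, LOCS, D))   # one vector per (level, location)

# Hypothetical stand-ins for the learned bottom-up / top-down networks.
W_up = rng.normal(size=(LEVELS, D, D)) / np.sqrt(D)
W_down = rng.normal(size=(LEVELS, D, D)) / np.sqrt(D)

def attention_consensus(X):
    """Same-level attention with weights ~ exp(x . y): similar vectors
    reinforce each other, which is how 'islands' of agreement form."""
    logits = X @ X.T
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ X

def glom_step(E):
    """One synchronous settling step over all levels and locations."""
    new_E = np.empty_like(E)
    for l in range(LEVELS):
        parts = [E[l]]                                   # previous state
        if l > 0:
            parts.append(np.tanh(E[l - 1] @ W_up[l]))    # bottom-up prediction
        if l < LEVELS - 1:
            parts.append(np.tanh(E[l + 1] @ W_down[l]))  # top-down prediction
        parts.append(attention_consensus(E[l]))          # same-level consensus
        new_E[l] = np.mean(parts, axis=0)  # paper: a trainable weighted mean
    return new_E

for _ in range(10):   # iterate towards a settled parse
    E = glom_step(E)
```

Iterating this a handful of times is the "consensus" part: locations that belong to the same object should end up with near-identical vectors at the object level, which is the island structure discussed around 18:30 in the video.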

OUTLINE:
0:00 - Intro & Overview
3:10 - Object Recognition as Parse Trees
5:40 - Capsule Networks
8:00 - GLOM Architecture Overview
13:10 - Top-Down and Bottom-Up communication
18:30 - Emergence of Islands
22:00 - Cross-Column Attention Mechanism
27:10 - My Improvements for the Attention Mechanism
35:25 - Some Design Decisions
43:25 - Training GLOM as a Denoising Autoencoder & Contrastive Learning
52:20 - Coordinate Transformations & Representing Uncertainty
57:05 - How GLOM handles Video
1:01:10 - Conclusion & Comments

Paper: https://arxiv.org/abs/2102.12627

Abstract:
This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
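As a toy illustration of the "islands of identical vectors" idea: once the network settles, locations whose vectors at a given level nearly agree can be grouped, and each group read out as one node of the parse tree at that level. The cosine threshold and greedy grouping below are assumptions for illustration only; the paper does not prescribe a read-out procedure.

```python
import numpy as np

def islands(X, thresh=0.95):
    """Group locations whose vectors at one level nearly agree
    (cosine similarity >= thresh). Each group is one parse-tree node.
    X: (num_locations, dim) embeddings at a single level."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    groups, assigned = [], set()
    for i in range(len(X)):
        if i in assigned:
            continue
        members = [j for j in range(len(X))
                   if j not in assigned and sim[i, j] >= thresh]
        groups.append(members)
        assigned.update(members)
    return groups

# Expected pattern: many small islands at low levels (parts),
# fewer and larger islands at high levels (wholes).
```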

Authors: Geoffrey Hinton

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-04-14 I BUILT A NEURAL NETWORK IN MINECRAFT | Analog Redstone Network w/ Backprop & Optimizer (NO MODS)
2021-04-11 DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
2021-04-07 PAIR AI Explorables | Is the problem in the data? Examples on Fairness, Diversity, and Bias.
2021-03-30 Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more!
2021-03-23 Is Google Translate Sexist? Gender Stereotypes in Statistical Machine Translation
2021-03-22 Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained)
2021-03-16 Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained)
2021-03-11 Yann LeCun - Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained)
2021-03-06 Apple or iPod??? Easy Fix for Adversarial Textual Attacks on OpenAI's CLIP Model! #Shorts
2021-03-05 Multimodal Neurons in Artificial Neural Networks (w/ OpenAI Microscope, Research Paper Explained)
2021-02-27 GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
2021-02-26 Linear Transformers Are Secretly Fast Weight Memory Systems (Machine Learning Paper Explained)
2021-02-25 DeBERTa: Decoding-enhanced BERT with Disentangled Attention (Machine Learning Paper Explained)
2021-02-19 Dreamer v2: Mastering Atari with Discrete World Models (Machine Learning Research Paper Explained)
2021-02-17 TransGAN: Two Transformers Can Make One Strong GAN (Machine Learning Research Paper Explained)
2021-02-14 NFNets: High-Performance Large-Scale Image Recognition Without Normalization (ML Paper Explained)
2021-02-11 Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (AI Paper Explained)
2021-02-04 Deep Networks Are Kernel Machines (Paper Explained)
2021-02-02 Feedback Transformers: Addressing Some Limitations of Transformers with Feedback Memory (Explained)
2021-01-29 SingularityNET - A Decentralized, Open Market and Network for AIs (Whitepaper Explained)
2021-01-22 Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
geoff hinton
geoff hinton capsule networks
geoff hinton neural networks
geoffrey hinton
geoffrey hinton deep learning
geoffrey hinton glom
hinton glom
glom model
deep learning tutorial
introduction to deep learning
capsule networks
computer vision
capsule networks explained
google brain
google ai
schmidhuber
transformer
attention mechanism
consensus algorithm
column