Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)

Subscribers: 284,000
Published on: 2021-09-20
Video Link: https://www.youtube.com/watch?v=pBau7umFhjQ
Duration: 32:04
Views: 17,896
Likes: 501


#tvae #topographic #equivariant

Variational Autoencoders model the latent space as a set of independent Gaussian random variables, which the decoder maps to a data distribution. However, this independence is not always desired: when dealing with video sequences, for example, we know that successive frames are heavily correlated, so any latent space modeling such data should reflect this in its structure. Topographic VAEs are a framework for defining correlation structures among the latent variables and for inducing equivariance in the resulting model. This paper shows how such correlation structures can be built by appropriately arranging higher-level variables that are themselves independent Gaussians.
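To make that construction concrete, here is a minimal NumPy sketch of the idea behind the Topographic Product of Student-t covered in the video: each latent is an independent Gaussian divided by the square root of a local sum of other squared Gaussians, so latents whose neighborhoods overlap share denominator terms and become correlated. The 1-D ring topology, sizes, and variable names are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hedged sketch of a topographic Student-t-like latent built from
# independent Gaussians. The ring topology and neighborhood size are
# assumptions for illustration, not the paper's exact setup.
rng = np.random.default_rng(0)
D = 16                        # number of latent variables on a ring
K = 3                         # neighborhood size of the local sum

z = rng.standard_normal(D)    # independent Gaussian numerators
u = rng.standard_normal(D)    # independent Gaussian denominator pool

# W sums u^2 over each variable's neighborhood; overlapping
# neighborhoods are what correlate nearby latents.
W = np.zeros((D, D))
for i in range(D):
    for k in range(-(K // 2), K // 2 + 1):
        W[i, (i + k) % D] = 1.0

# Each t_i is marginally Student-t-like; neighbors share denominator
# terms and are therefore dependent, even though z and u are not.
t = z / np.sqrt(W @ u**2)
print(t.round(3))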

OUTLINE:
0:00 - Intro
1:40 - Architecture Overview
6:30 - Comparison to regular VAEs
8:35 - Generative Mechanism Formulation
11:45 - Non-Gaussian Latent Space
17:30 - Topographic Product of Student-t
21:15 - Introducing Temporal Coherence
24:50 - Topographic VAE
27:50 - Experimental Results
31:15 - Conclusion & Comments

Paper: https://arxiv.org/abs/2109.01394
Code: https://github.com/akandykeller/topographicvae

Abstract:
In this work we seek to bridge the concepts of topographic organization and equivariance in neural networks. To accomplish this, we introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables. We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST. Furthermore, through topographic organization over time (i.e. temporal coherence), we demonstrate how predefined latent space transformation operators can be encouraged for observed transformed input sequences -- a primitive form of unsupervised learned equivariance. We demonstrate that this model successfully learns sets of approximately equivariant features (i.e. "capsules") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. Equivariance is verified quantitatively by measuring the approximate commutativity of the inference network and the sequence transformations. Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
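The equivariance check mentioned at the end of the abstract (approximate commutativity of the inference network with the sequence transformations) can be illustrated with a small hedged sketch: an encoder is approximately equivariant if encoding a transformed input matches applying a fixed latent operator, here a cyclic Roll, to the encoding of the original input. The encode and transform arguments below are placeholders, not the paper's models.

import numpy as np

def equivariance_error(encode, transform, x, shift=1):
    """Relative gap between encode(transform(x)) and Roll(encode(x))."""
    lhs = encode(transform(x))                  # transform, then encode
    rhs = np.roll(encode(x), shift, axis=-1)    # encode, then Roll
    return np.linalg.norm(lhs - rhs) / np.linalg.norm(rhs)

# Toy check with a trivially equivariant pair: an identity "encoder" and
# an input transformation that is itself a cyclic shift give zero error.
x = np.arange(8.0)
print(equivariance_error(lambda v: v, lambda v: np.roll(v, 1), x))  # 0.0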

Authors: T. Anderson Keller, Max Welling

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-10-21  I took a Swiss train and it was awesome! Train Seat Review - SBB InterCity 1 - Geneva to St. Gallen
2021-10-20  [ML News] Microsoft trains 530B model | ConvMixer model fits into single tweet | DeepMind profitable
2021-10-07  [ML News] DeepMind does Nowcasting | The Guardian's shady reporting | AI finishes Beethoven's 10th
2021-10-06  Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained)
2021-10-02  How far can we scale up? Deep Learning's Diminishing Returns (Article Review)
2021-09-29  [ML News] Plagiarism Case w/ Plot Twist | CLIP for video surveillance | OpenAI summarizes books
2021-09-27  Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained)
2021-09-26  100K Subs AMA (Ask Me Anything)
2021-09-24  [ML News] New ImageNet SOTA | Uber's H3 hexagonal coordinate system | New text-image-pair dataset
2021-09-21  Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset
2021-09-20  Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
2021-09-16  [ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog
2021-09-14  Celebrating 100k Subscribers! (w/ Channel Statistics)
2021-09-10  [ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero
2021-09-06  ∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained)
2021-09-03  [ML News] Blind Chess AI Competition | Graph NNs for traffic | AI gift suggestions
2021-09-02  ALiBi - Train Short, Test Long: Attention with linear biases enables input length extrapolation
2021-08-27  [ML News] Stanford HAI coins Foundation Models & High-profile case of plagiarism uncovered
2021-08-26  Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained)
2021-08-23  PonderNet: Learning to Ponder (Machine Learning Research Paper Explained)
2021-08-19  NeuralHash is BROKEN - How to evade Apple's detection & craft hash collisions (w/ Open Source Code)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
vae
variational
bayesian
variational methods
variational autoencoder
max welling
elbo
prior
student t
reparameterization trick
log likelihood
encoder decoder