NVAE: A Deep Hierarchical Variational Autoencoder (Paper Explained)

Subscribers: 284,000
Video link: https://www.youtube.com/watch?v=x6T1zMSE4Ts
Duration: 34:12
Views: 31,495
Likes: 1,012

VAEs have traditionally been hard to train at high resolutions and unstable when made deep with many layers. In addition, VAE samples tend to be blurrier and less crisp than those from GANs. This paper details all the engineering choices necessary to successfully train a deep hierarchical VAE that exhibits global consistency and astounding sharpness at high resolutions.
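As background for the video's VAE recap: training maximizes the ELBO, the expected reconstruction log-likelihood minus the KL between the approximate posterior and the prior. A minimal sketch of the standard-normal-prior KL term (textbook diagonal-Gaussian form, not NVAE-specific; function name is illustrative):

```python
import math

def kl_std_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ), summed over dimensions.
    Per-dimension closed form: 0.5 * (exp(log_var) + mu^2 - 1 - log_var)."""
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, log_var)
    )

# A posterior matching the prior contributes zero KL to the loss.
print(kl_std_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

In practice this term is computed per latent dimension and averaged over the batch; it regularizes the encoder toward the prior.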

OUTLINE:
0:00 - Intro & Overview
1:55 - Variational Autoencoders
8:25 - Hierarchical VAE Decoder
12:45 - Output Samples
15:00 - Hierarchical VAE Encoder
17:20 - Engineering Decisions
22:10 - KL from Deltas
26:40 - Experimental Results
28:40 - Appendix
33:00 - Conclusion

Paper: https://arxiv.org/abs/2007.03898

Abstract:
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ as shown in Fig. 1. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels.
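The "residual parameterization of Normal distributions" mentioned in the abstract (the "KL from Deltas" segment of the video) has the encoder predict deltas relative to the prior at each hierarchy level, so the KL depends only on those deltas. A minimal sketch under the assumption that q = N(mu_p + d_mu, (sigma_p * d_sigma)^2) against prior p = N(mu_p, sigma_p^2), which matches the closed form discussed in the paper (function name is illustrative):

```python
import math

def kl_residual(d_mu, d_log_sigma, sigma_p):
    """Per-dimension KL(q || p) for the residual parameterization:
    q = N(mu_p + d_mu, (sigma_p * d_sigma)^2), p = N(mu_p, sigma_p^2),
    with d_sigma = exp(d_log_sigma). The prior mean mu_p cancels out,
    leaving 0.5 * (d_mu^2 / sigma_p^2 + d_sigma^2 - log(d_sigma^2) - 1)."""
    d_sigma_sq = math.exp(2.0 * d_log_sigma)
    return 0.5 * (d_mu ** 2 / sigma_p ** 2 + d_sigma_sq - 2.0 * d_log_sigma - 1.0)

# With zero deltas the posterior collapses onto the prior and the KL vanishes,
# regardless of the prior's own mean and scale.
print(kl_residual(0.0, 0.0, 1.5))  # 0.0
```

The design point is stability: when the encoder outputs small deltas, the KL stays near zero even as the prior's parameters shift during training of a deep hierarchy.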

Authors: Arash Vahdat, Jan Kautz

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher




Other Videos By Yannic Kilcher


2020-07-29 Self-training with Noisy Student improves ImageNet classification (Paper Explained)
2020-07-26 [Classic] Playing Atari with Deep Reinforcement Learning (Paper Explained)
2020-07-23 [Classic] ImageNet Classification with Deep Convolutional Neural Networks (Paper Explained)
2020-07-21 Neural Architecture Search without Training (Paper Explained)
2020-07-19 [Classic] Generative Adversarial Networks (Paper Explained)
2020-07-16 [Classic] Word2Vec: Distributed Representations of Words and Phrases and their Compositionality
2020-07-14 [Classic] Deep Residual Learning for Image Recognition (Paper Explained)
2020-07-12 I'M TAKING A BREAK... (Channel Update July 2020)
2020-07-11 Deep Ensembles: A Loss Landscape Perspective (Paper Explained)
2020-07-10 Gradient Origin Networks (Paper Explained w/ Live Coding)
2020-07-09 NVAE: A Deep Hierarchical Variational Autoencoder (Paper Explained)
2020-07-08 Addendum for Supermasks in Superposition: A Closer Look (Paper Explained)
2020-07-07 SupSup: Supermasks in Superposition (Paper Explained)
2020-07-06 [Live Machine Learning Research] Plain Self-Ensembles (I actually DISCOVER SOMETHING) - Part 1
2020-07-05 SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization (Paper Explained)
2020-07-04 Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (Paper Explained)
2020-07-03 On the Measure of Intelligence by François Chollet - Part 4: The ARC Challenge (Paper Explained)
2020-07-02 BERTology Meets Biology: Interpreting Attention in Protein Language Models (Paper Explained)
2020-07-01 GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (Paper Explained)
2020-06-30 Object-Centric Learning with Slot Attention (Paper Explained)
2020-06-29 Set Distribution Networks: a Generative Model for Sets of Images (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
gan
vae
kl
elbo
autoencoder
variational
latent
sampling
hierarchical
scales
faces
mnist
cifar10
swish
batch norm
generative
nvidia
mixed precision
memory
deep
layers
depthwise convolutions
cnn
convolutional
generation
generative model