Set Distribution Networks: a Generative Model for Sets of Images (Paper Explained)

Subscribers:
284,000
Published on 2020-06-29
Video Link: https://www.youtube.com/watch?v=V79rRI05Lj4



Duration: 59:18
5,170 views
154


We've become very good at building generative models for images and for classes of images, but not yet for sets of images, especially when the number of sets is unknown and the data can contain sets that were never encountered during training. This paper builds a probabilistic framework and a practical implementation of a generative model for sets of images based on variational methods.

OUTLINE:
0:00 - Intro & Overview
1:25 - Problem Statement
8:05 - Architecture Overview
20:05 - Probabilistic Model
33:50 - Likelihood Function
40:30 - Model Architectures
44:20 - Loss Function & Optimization
47:30 - Results
58:45 - Conclusion

Paper: https://arxiv.org/abs/2006.10705

Abstract:
Images with shared characteristics naturally form sets. For example, in a face verification benchmark, images of the same identity form sets. For generative models, the standard way of dealing with sets is to represent each as a one hot vector, and learn a conditional generative model p(x|y). This representation assumes that the number of sets is limited and known, such that the distribution over sets reduces to a simple multinomial distribution. In contrast, we study a more generic problem where the number of sets is large and unknown. We introduce Set Distribution Networks (SDNs), a novel framework that learns to autoencode and freely generate sets. We achieve this by jointly learning a set encoder, set discriminator, set generator, and set prior. We show that SDNs are able to reconstruct image sets that preserve salient attributes of the inputs in our benchmark datasets, and are also able to generate novel objects/identities. We examine the sets generated by SDN with a pre-trained 3D reconstruction network and a face verification network, respectively, as a novel way to evaluate the quality of generated sets of images.
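The abstract names four components that SDN learns jointly: a set encoder, a set generator, a set discriminator (the tags below hint this is energy-based), and a set prior (over binary codes, learned with MADE, per the tags). The following is a minimal toy sketch of that four-part structure only, with random linear maps standing in for the real networks; all dimensions, function names, and the Gaussian stand-in for the learned binary prior are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration.
IMG_DIM, LATENT_DIM = 8, 4

# Randomly initialized linear maps stand in for the learned networks.
W_enc = rng.normal(size=(IMG_DIM, LATENT_DIM))   # set encoder weights
W_gen = rng.normal(size=(LATENT_DIM, IMG_DIM))   # set generator weights
W_dis = rng.normal(size=(IMG_DIM,))              # set discriminator weights

def encode_set(images):
    """Set encoder E: pool per-image features into one set-level code.
    Mean pooling makes the code permutation-invariant, which is the key
    property needed for set-valued inputs."""
    return np.tanh(images @ W_enc).mean(axis=0)

def generate_set(z, n_images, noise_scale=0.1):
    """Set generator G: decode the set code z into n_images samples,
    each perturbed by independent per-image noise."""
    base = z @ W_gen                              # shared set content
    return base + noise_scale * rng.normal(size=(n_images, IMG_DIM))

def energy(images, z):
    """Set discriminator D: an energy score for a (set, code) pair;
    lower energy should mean a more plausible set under the code."""
    recon = z @ W_gen
    mse = np.mean((images - recon) ** 2)          # fit to the code
    score = np.mean(images @ W_dis)               # per-image linear score
    return float(mse + score)

def sample_prior():
    """Set prior p(z): the paper learns a prior over binary codes;
    a standard normal stands in here for simplicity."""
    return rng.normal(size=LATENT_DIM)
```

Jointly training these four pieces is what lets the model both autoencode a given set (encode, then generate) and freely generate novel sets (sample the prior, then generate), which is how the abstract describes producing identities never seen in training.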

Authors: Shuangfei Zhai, Walter Talbott, Miguel Angel Bautista, Carlos Guestrin, Josh M. Susskind

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher




Other Videos By Yannic Kilcher


2020-07-09 NVAE: A Deep Hierarchical Variational Autoencoder (Paper Explained)
2020-07-08 Addendum for Supermasks in Superposition: A Closer Look (Paper Explained)
2020-07-07 SupSup: Supermasks in Superposition (Paper Explained)
2020-07-06 [Live Machine Learning Research] Plain Self-Ensembles (I actually DISCOVER SOMETHING) - Part 1
2020-07-05 SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization (Paper Explained)
2020-07-04 Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (Paper Explained)
2020-07-03 On the Measure of Intelligence by François Chollet - Part 4: The ARC Challenge (Paper Explained)
2020-07-02 BERTology Meets Biology: Interpreting Attention in Protein Language Models (Paper Explained)
2020-07-01 GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (Paper Explained)
2020-06-30 Object-Centric Learning with Slot Attention (Paper Explained)
2020-06-29 Set Distribution Networks: a Generative Model for Sets of Images (Paper Explained)
2020-06-28 Context R-CNN: Long Term Temporal Context for Per-Camera Object Detection (Paper Explained)
2020-06-27 Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures (Paper Explained)
2020-06-26 On the Measure of Intelligence by François Chollet - Part 3: The Math (Paper Explained)
2020-06-25 Discovering Symbolic Models from Deep Learning with Inductive Biases (Paper Explained)
2020-06-24 How I Read a Paper: Facebook's DETR (Video Tutorial)
2020-06-23 RepNet: Counting Out Time - Class Agnostic Video Repetition Counting in the Wild (Paper Explained)
2020-06-22 [Drama] Yann LeCun against Twitter on Dataset Bias
2020-06-21 SIREN: Implicit Neural Representations with Periodic Activation Functions (Paper Explained)
2020-06-20 Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
2020-06-19 On the Measure of Intelligence by François Chollet - Part 2: Human Priors (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
sets
images
cnn
convolutional neural network
gan
generator
encoder
discriminator
prior
mean
made
latent
binary
conditional
noise
distribution
probability
energy-based
energy
apple
research
sdn
variational
elbo