Self-classifying MNIST Digits (Paper Explained)

Subscribers: 284,000
Published on: 2020-09-02
Video Link: https://www.youtube.com/watch?v=EbHUU-gLyRA
Duration: 30:31
Views: 12,728


#ai #biology #machinelearning

Neural Cellular Automata are models of how living creatures can use local message passing to reach global consensus without a central authority. This paper teaches the pixels of an image to communicate with each other and figure out as a group which digit they represent. Along the way, the authors have to deal with pesky side effects that come from applying the cross-entropy loss in combination with a softmax layer, but ultimately achieve a self-sustaining, stable, and continuously running algorithm that models living systems.
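For intuition, here is a minimal sketch of the core idea as a recurrent convolution: every cell (pixel) repeatedly perceives its 3x3 neighborhood and updates its own latent state with one small shared network. This is plain NumPy with made-up channel counts and random placeholder weights; the paper's actual architecture, sizes, and training code differ.

```python
import numpy as np

# Hypothetical sizes: each cell carries a latent state vector; part of
# that state is later read out as class logits.
H, W, STATE_CH = 28, 28, 19

rng = np.random.default_rng(0)
state = np.zeros((H, W, STATE_CH))
alive = rng.random((H, W)) > 0.8      # stand-in for the digit's foreground mask

# Placeholder "learned" update rule: a tiny MLP applied at every cell,
# reading the concatenated 3x3 neighborhood (i.e. a convolution).
W1 = rng.normal(0.0, 0.1, (9 * STATE_CH, 64))
W2 = rng.normal(0.0, 0.1, (64, STATE_CH))

def ca_step(state, alive):
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)))
    # Gather every cell's 3x3 neighborhood -> (H, W, 9 * STATE_CH)
    patches = np.concatenate(
        [padded[i:i + H, j:j + W] for i in range(3) for j in range(3)],
        axis=-1)
    update = np.maximum(patches @ W1, 0.0) @ W2   # ReLU MLP as update rule
    new_state = state + update                    # residual update
    return new_state * alive[..., None]           # dead cells stay silent

# Applying the same local rule over and over is what makes this a
# recurrent convolution: information travels one neighborhood per step.
for _ in range(20):
    state = ca_step(state, alive)

logits = state[..., -10:]   # each living cell's current "vote" for a digit
```

Because every cell runs the identical rule, the whole system is one small network unrolled in time, which is what makes it end-to-end differentiable and trainable with backpropagation through the steps.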

OUTLINE:
0:00 - Intro & Overview
3:10 - Neural Cellular Automata
7:30 - Global Agreement via Message-Passing
11:05 - Neural CAs as Recurrent Convolutions
14:30 - Training Continuously Alive Systems
17:30 - Problems with Cross-Entropy
26:10 - Out-of-Distribution Robustness
27:10 - Chimeric Digits
27:45 - Visualizing Latent State Dimensions
29:05 - Conclusion & Comments

Paper: https://distill.pub/2020/selforg/mnist/

My Video on Neural CAs: https://youtu.be/9Kec_7WFyp0

Abstract:
Growing Neural Cellular Automata [1] demonstrated how simple cellular automata (CAs) can learn to self-organise into complex shapes while being resistant to perturbations. Such a computational model approximates a solution to an open question in biology, namely, how do cells cooperate to create a complex multicellular anatomy and work to regenerate it upon damage? The model parameterizing the cells’ rules is parameter-efficient, end-to-end differentiable, and illustrates a new approach to modeling the regulation of anatomical homeostasis. In this work, we use a version of this model to show how CAs can be applied to a common task in machine learning: classification. We pose the question: can CAs use local message passing to achieve global agreement on what digit they compose?
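The classification objective that goes with this question can be sketched as a per-cell cross-entropy: every living cell is treated as its own classifier and is pushed toward the image's digit label. This is a hedged illustration of the idea, not the paper's code; the exact loss, masking, and the softmax side effects discussed in the video are detailed in the paper.

```python
import numpy as np

def per_cell_cross_entropy(logits, alive, label):
    """Mean cross-entropy over living cells, each treated as a classifier.

    logits: (H, W, 10) per-cell class scores, alive: (H, W) boolean mask,
    label: int in 0..9. An illustrative objective, not the paper's code.
    """
    z = logits - logits.max(axis=-1, keepdims=True)               # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(log_probs[..., label] * alive).sum() / alive.sum()
```

Plausibly, such a loss would be applied after unrolling the CA for a number of steps, so that the cells learn not just to reach agreement but to maintain it over time.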

Authors: Ettore Randazzo, Alexander Mordvintsev, Eyvind Niklasson, Michael Levin, Sam Greydanus

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2020-11-10  Underspecification Presents Challenges for Credibility in Modern Machine Learning (Paper Explained)
2020-11-02  Language Models are Open Knowledge Graphs (Paper Explained)
2020-10-26  Rethinking Attention with Performers (Paper Explained)
2020-10-17  LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)
2020-10-11  Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)
2020-10-04  An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
2020-10-03  Training more effective learned optimizers, and using them to train themselves (Paper Explained)
2020-09-18  The Hardware Lottery (Paper Explained)
2020-09-13  Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)
2020-09-07  Learning to summarize from human feedback (Paper Explained)
2020-09-02  Self-classifying MNIST Digits (Paper Explained)
2020-08-28  Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
2020-08-26  Radioactive data: tracing through training (Paper Explained)
2020-08-23  Fast reinforcement learning with generalized policy updates (Paper Explained)
2020-08-20  What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained)
2020-08-18  [Rant] REVIEWER #2: How Peer Review is FAILING in Machine Learning
2020-08-14  REALM: Retrieval-Augmented Language Model Pre-Training (Paper Explained)
2020-08-12  Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
2020-08-09  Hopfield Networks is All You Need (Paper Explained)
2020-08-06  I TRAINED AN AI TO SOLVE 2+2 (w/ Live Coding)
2020-08-04  PCGRL: Procedural Content Generation via Reinforcement Learning (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
biology
biological
alive
living
message passing
global state
local state
information
cellular automata
neural cellular automata
neural ca
convolution
recurrent
rnn
pixels
cell state
latent state
distill
distill pub
mnist
neural network
digit classification