Predictive Coding Approximates Backprop along Arbitrary Computation Graphs (Paper Explained)

Subscribers: 284,000
Published on: 2020-11-29
Video Link: https://www.youtube.com/watch?v=LB4B5FYvtdI
Category: Vlog
Duration: 48:25
Views: 24,444

#ai #biology #neuroscience

Backpropagation is the workhorse of modern deep learning and a core component of most frameworks, but it has long been known to be biologically implausible, driving a divide between neuroscience and machine learning. This paper shows that predictive coding, a much more biologically plausible algorithm, can approximate backpropagation on any computation graph, which the authors verify experimentally by building and training CNNs and LSTMs with predictive coding. This suggests that the brain and deep neural networks could be much more similar than previously believed.
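To make the claim concrete, here is a minimal numpy sketch (my own illustration, not code from the linked repo) of the paper's "fixed prediction assumption" variant on a two-layer MLP: clamp the output error to the loss gradient, let the hidden activity settle under purely local dynamics, and the converged prediction errors match the backprop gradients, so the local Hebbian-style weight updates match the backprop weight gradients.

```python
# Minimal sketch (mine, not the authors' code): predictive coding with the
# paper's "fixed prediction assumption" on a tiny 2-layer MLP, checking that
# the converged prediction errors reproduce the backprop gradients.
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))   # toy batch
W1, W2 = 0.3 * rng.normal(size=(3, 8)), 0.3 * rng.normal(size=(8, 1))

# Feedforward pass: v1 = tanh(x W1), v2 = v1 W2, loss L = 0.5 * ||v2 - y||^2
v1 = np.tanh(x @ W1)
v2 = v1 @ W2

# Backprop reference gradients for the activations.
dL_dv2 = v2 - y
dL_dv1 = dL_dv2 @ W2.T

# Predictive coding: the output error is clamped to dL/dv2, then the hidden
# activity x1 relaxes under purely local dynamics (the "inner loop").
e2 = dL_dv2                  # clamped output error
x1 = v1.copy()               # hidden activity, initialised at feedforward value
for _ in range(200):
    e1 = x1 - v1                           # local prediction error at hidden node
    x1 = x1 + 0.1 * (-e1 + e2 @ W2.T)      # pull toward prediction + feedback from child

e1 = x1 - v1
print(np.allclose(e1, dL_dv1))             # True: errors converge to backprop gradients

# Weight updates use only local quantities (presynaptic activity x error),
# yet equal the backprop weight gradients:
dW2 = v1.T @ e2                            # == dL/dW2
dW1 = x.T @ (e1 * (1 - v1 ** 2))           # == dL/dW1 (tanh derivative)
```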

OUTLINE:
0:00 - Intro & Overview
3:00 - Backpropagation & Biology
7:40 - Experimental Results
8:40 - Predictive Coding
29:00 - Pseudocode
32:10 - Predictive Coding approximates Backprop
35:00 - Hebbian Updates
36:35 - Code Walkthrough
46:30 - Conclusion & Comments

Paper: https://arxiv.org/abs/2006.04182
Code: https://github.com/BerenMillidge/PredictiveCodingBackprop

Abstract:
Backpropagation of error (backprop) is a powerful algorithm for training machine learning architectures through end-to-end differentiation. However, backprop is often criticised for lacking biological plausibility. Recently, it has been shown that backprop in multilayer-perceptrons (MLPs) can be approximated using predictive coding, a biologically-plausible process theory of cortical computation which relies only on local and Hebbian updates. The power of backprop, however, lies not in its instantiation in MLPs, but rather in the concept of automatic differentiation which allows for the optimisation of any differentiable program expressed as a computation graph. Here, we demonstrate that predictive coding converges asymptotically (and in practice rapidly) to exact backprop gradients on arbitrary computation graphs using only local learning rules. We apply this result to develop a straightforward strategy to translate core machine learning architectures into their predictive coding equivalents. We construct predictive coding CNNs, RNNs, and the more complex LSTMs, which include a non-layer-like branching internal graph structure and multiplicative interactions. Our models perform equivalently to backprop on challenging machine learning benchmarks, while utilising only local and (mostly) Hebbian plasticity. Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry, and may also contribute to the development of completely distributed neuromorphic architectures.
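For reference, the general per-node equations behind this claim, in my own notation (a hedged reconstruction, not text from the paper): each value node carries a fixed feedforward prediction and a prediction error, and the inference dynamics drive the errors toward the backprop gradients.

```latex
% Notation mine (hedged reconstruction). Value node x_i, feedforward
% prediction \hat{x}_i = f_i(x_{\mathrm{pa}(i)}), error \epsilon_i.
\epsilon_i = x_i - \hat{x}_i,
\qquad
\dot{x}_i = -\epsilon_i
  + \sum_{k \in \mathrm{ch}(i)} \frac{\partial \hat{x}_k}{\partial x_i}^{\!\top} \epsilon_k
% At equilibrium, \epsilon_i = \partial L / \partial x_i, so the local update
\Delta \theta_i \propto \frac{\partial \hat{x}_i}{\partial \theta_i}^{\!\top} \epsilon_i
% matches the backprop weight gradient.
```

At the fixed point the errors equal the backprop gradients, which is exactly what the numpy sketch above checks for a two-layer network.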

Authors: Beren Millidge, Alexander Tschantz, Christopher L. Buckley

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n





Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
backpropagation
computation
autograph
tensorflow
pytorch
torch
autodiff
differentiation
backprop
biologically plausible
neurons
error signal
predictive coding
variational
gaussian
iterative
local updates
distributed
inner loop
brain
neuroscience
deep neural networks
analyzed
hand drawing
cnn
rnn
lstm
convolutional neural network
recurrent neural network
hebbian