PonderNet: Learning to Ponder (Machine Learning Research Paper Explained)


Subscribers: 291,000
Video Link: https://www.youtube.com/watch?v=nQDZmf2Yb9k
Duration: 44:20
Views: 22,044

#pondernet #deepmind #machinelearning

Humans don't spend the same amount of mental effort on every problem. Instead, we respond quickly to easy tasks, and we take our time to deliberate on hard ones. DeepMind's PonderNet attempts to achieve the same by dynamically deciding how many computation steps to allocate to each input sample. This is done via a recurrent architecture and a trainable function that computes a halting probability. The resulting model performs well on dynamic-computation tasks and is surprisingly robust to different hyperparameter settings.
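As a rough illustration of the mechanism just described, here is a minimal PyTorch sketch (not the authors' code): a recurrent cell updates a hidden state, and a small halting head emits the conditional probability lambda_n of stopping at step n. The class name, the choice of a GRU cell, and max_steps are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

class PonderStep(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)   # recurrent state update
        self.out = nn.Linear(hidden_dim, output_dim)    # per-step prediction head
        self.halt = nn.Linear(hidden_dim, 1)            # halting-probability head

    def forward(self, x, h):
        h = self.cell(x, h)                              # s_n = f(x, s_{n-1})
        y = self.out(h)                                  # step prediction y_n
        lam = torch.sigmoid(self.halt(h)).squeeze(-1)    # lambda_n = P(halt at n | not halted yet)
        return y, lam, h

def ponder_inference(model, x, h, max_steps=20):
    """Test-time behavior: at each step, halt with probability lambda_n."""
    for n in range(max_steps):
        y, lam, h = model(x, h)
        if torch.bernoulli(lam).item() == 1:             # batch size 1 for simplicity
            return y, n + 1                              # pondered for n+1 steps
    return y, max_steps                                  # forced halt at the unroll limit

Note that at inference time halting is sampled step by step, while at training time the network is unrolled for a fixed number of steps and all steps contribute to the loss, as sketched after the abstract below.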

OUTLINE:
0:00 - Intro & Overview
2:30 - Problem Statement
8:00 - Probabilistic formulation of dynamic halting
14:40 - Training via unrolling
22:30 - Loss function and regularization of the halting distribution
27:35 - Experimental Results
37:10 - Sensitivity to hyperparameter choice
41:15 - Discussion, Conclusion, Broader Impact

Paper: https://arxiv.org/abs/2107.05407

Abstract:
In standard neural networks the amount of computation used grows with the size of the inputs, but not with the complexity of the problem being learnt. To overcome this limitation we introduce PonderNet, a new algorithm that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet learns end-to-end the number of computational steps to achieve an effective compromise between training prediction accuracy, computational cost and generalization. On a complex synthetic problem, PonderNet dramatically improves performance over previous adaptive computation methods and additionally succeeds at extrapolation tests where traditional neural networks fail. Also, our method matched the current state of the art results on a real world question and answering dataset, but using less compute. Finally, PonderNet reached state of the art results on a complex task designed to test the reasoning capabilities of neural networks.
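To make the compromise between prediction accuracy, computational cost and generalization concrete: the paper unrolls the network for N steps and trains on all of them at once, weighting each step's loss by the probability p_n of halting there, and regularizing the halting distribution toward a geometric prior via a KL term. Below is a hedged Python sketch of that objective; the function name and the default values of beta and lambda_p are illustrative, and the paper's handling of the truncated/renormalized prior is omitted.

import torch

def pondernet_loss(step_losses, lambdas, beta=0.01, lambda_p=0.2):
    # step_losses: (N,) per-step prediction losses L(y, y_hat_n)
    # lambdas:     (N,) conditional halting probabilities lambda_n in (0, 1)
    N = lambdas.shape[0]
    survive = torch.cumprod(1.0 - lambdas, dim=0)              # prod_{j<=n} (1 - lambda_j)
    prev = torch.cat([torch.ones(1), survive[:-1]])            # prod_{j<n} (1 - lambda_j)
    p = lambdas * prev                                         # halting distribution p_n
    reconstruction = (p * step_losses).sum()                   # E_p[ L(y, y_hat_n) ]
    steps = torch.arange(N, dtype=torch.float32)
    prior = lambda_p * (1.0 - lambda_p) ** steps               # geometric prior over halting step
    kl = (p * (torch.log(p + 1e-9) - torch.log(prior))).sum()  # KL(p || Geometric(lambda_p))
    return reconstruction + beta * kl

The geometric prior encourages halting early on average (expected 1/lambda_p steps) without ever assigning zero probability to pondering longer, which is what lets the model spend more steps on harder inputs.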

Authors: Andrea Banino, Jan Balaguer, Charles Blundell

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing you can do is share the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-09-21 Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset
2021-09-20 Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
2021-09-16 [ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog
2021-09-14 Celebrating 100k Subscribers! (w/ Channel Statistics)
2021-09-10 [ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero
2021-09-06 ∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained)
2021-09-03 [ML News] Blind Chess AI Competition | Graph NNs for traffic | AI gift suggestions
2021-09-02 ALiBi - Train Short, Test Long: Attention with linear biases enables input length extrapolation
2021-08-27 [ML News] Stanford HAI coins Foundation Models & High-profile case of plagiarism uncovered
2021-08-26 Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained)
2021-08-23 PonderNet: Learning to Ponder (Machine Learning Research Paper Explained)
2021-08-19 NeuralHash is BROKEN - How to evade Apple's detection & craft hash collisions (w/ Open Source Code)
2021-08-18 [ML News] Nvidia renders CEO | Jurassic-1 larger than GPT-3 | Tortured Phrases reveal Plagiarism
2021-08-16 How Apple scans your phone (and how to evade it) - NeuralHash CSAM Detection Algorithm Explained
2021-08-13 [ML NEWS] Apple scans your phone | Master Faces beat face recognition | WALL-E is real
2021-08-06 [ML News] AI-generated patent approved | Germany gets an analog to OpenAI | ML cheats video games
2021-08-02 [ML News] MMO Game destroys GPUs | OpenAI quits Robotics | Today w/ guest host Sanyam Bhutani
2021-07-15 [ML News] Facebook AI adapting robots | Baidu autonomous excavators | Happy Birthday EleutherAI
2021-07-11 I'm taking a break
2021-07-08 [ML News] GitHub Copilot - Copyright, GPL, Patents & more | Brickit LEGO app | Distill goes on break
2021-07-03 Self-driving from VISION ONLY - Tesla's self-driving progress by Andrej Karpathy (Talk Analysis)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
pondernet
deepmind
pondernet learning to ponder
deepmind pondernet
pondernet explained
dynamic computation
deep learning classic algorithms
halting probability
deep learning recurrent computation
dynamic recurrent network
broader impact
deep network learning to stop