Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained)

Subscribers: 284,000
Published on: 2021-10-06
Video Link: https://www.youtube.com/watch?v=dND-7llwrpw
Duration: 29:47
Views: 68,030

#grokking #openai #deeplearning

Grokking is a phenomenon in which a neural network abruptly learns the pattern underlying a dataset, jumping from chance-level to perfect generalization. This paper demonstrates grokking on small algorithmic datasets where a network has to fill in binary operation tables. Interestingly, the learned latent spaces show an emergence of the underlying binary operations that the data were created with.
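To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of experiment the paper runs: build the full table of a binary operation, here addition mod 97 rather than the paper's harder operations, hide part of it, and train a small network with weight decay while logging training and validation accuracy. The model, hyperparameters, and names (TableModel, train_fraction) are illustrative assumptions, not the paper's; they only serve to show the table-filling task and the train/validation split one would monitor for the delayed jump.

# Minimal sketch (not the authors' code): learn to fill in the table of a
# binary operation, here (a + b) mod p, from a subset of its entries, and log
# train vs. validation accuracy over many optimization steps. With a small
# training fraction and nonzero weight decay, validation accuracy can stay
# near chance long after the training set is memorized before it jumps.
import torch
import torch.nn as nn

p = 97                  # modulus; the full operation table has p * p entries
train_fraction = 0.4    # fraction of the table the network gets to see

# Every cell of the table is one example: inputs (a, b), label (a + b) mod p.
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p

perm = torch.randperm(len(pairs))
n_train = int(train_fraction * len(pairs))
train_idx, val_idx = perm[:n_train], perm[n_train:]

class TableModel(nn.Module):
    """Embed both operands, concatenate, and predict the result with an MLP."""
    def __init__(self, p, dim=128):
        super().__init__()
        self.embed = nn.Embedding(p, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 256),
            nn.ReLU(),
            nn.Linear(256, p),
        )

    def forward(self, ab):
        e = self.embed(ab)              # (batch, 2, dim)
        return self.mlp(e.flatten(1))   # (batch, p) logits over possible results

model = TableModel(p)
# The paper reports that weight decay strongly encourages the late jump to
# generalization, hence AdamW with a nonzero weight_decay here.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        preds = model(pairs[idx]).argmax(dim=-1)
        return (preds == labels[idx]).float().mean().item()

for step in range(100_000):             # grokking can take very many steps
    batch = train_idx[torch.randint(len(train_idx), (512,))]
    loss = loss_fn(model(pairs[batch]), labels[batch])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        print(f"step {step:6d}  train acc {accuracy(train_idx):.3f}  "
              f"val acc {accuracy(val_idx):.3f}")

The paper itself trains a small transformer on token sequences of the form "a ∘ b = c" with harder operations such as division mod p and composition in the S5 group; the MLP above is only meant to make the setup and the quantities discussed in the video (training fraction, optimization steps, weight decay) concrete.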

OUTLINE:
0:00 - Intro & Overview
1:40 - The Grokking Phenomenon
3:50 - Related: Double Descent
7:50 - Binary Operations Datasets
11:45 - What quantities influence grokking?
15:40 - Learned Emerging Structure
17:35 - The role of smoothness
21:30 - Simple explanations win
24:30 - Why does weight decay encourage simplicity?
26:40 - Appendix
28:55 - Conclusion & Comments

Paper: https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf

Abstract:
In this paper we propose to study generalization of neural networks on small algorithmically generated datasets. In this setting, questions about data efficiency, memorization, generalization, and speed of learning can be studied in great detail. In some situations we show that neural networks learn through a process of “grokking” a pattern in the data, improving generalization performance from random chance level to perfect generalization, and that this improvement in generalization can happen well past the point of overfitting. We also study generalization as a function of dataset size and find that smaller datasets require increasing amounts of optimization for generalization. We argue that these datasets provide a fertile ground for studying a poorly understood aspect of deep learning: generalization of overparametrized neural networks beyond memorization of the finite training dataset.

Authors: Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin & Vedant Misra

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-11-05 [ML News] Google introduces Pathways | OpenAI solves Math Problems | Meta goes First Person
2021-11-03 EfficientZero: Mastering Atari Games with Limited Data (Machine Learning Research Paper Explained)
2021-10-31 [YTalks] Siraj Raval - Stories about YouTube, Plagiarism, and the Dangers of Fame (Interview)
2021-10-29 [ML News] NVIDIA GTC'21 | DeepMind buys MuJoCo | Google predicts spreadsheet formulas
2021-10-29 [ML News GERMAN] NVIDIA GTC'21 | DeepMind buys MuJoCo | Google learns spreadsheet formulas
2021-10-27 I went to an AI Art Festival in Geneva (AiiA Festival Trip Report)
2021-10-24 Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (Explained)
2021-10-21 I took a Swiss train and it was awesome! Train Seat Review - SBB InterCity 1 - Geneva to St. Gallen
2021-10-20 [ML News] Microsoft trains 530B model | ConvMixer model fits into single tweet | DeepMind profitable
2021-10-07 [ML News] DeepMind does Nowcasting | The Guardian's shady reporting | AI finishes Beethoven's 10th
2021-10-06 Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained)
2021-10-02 How far can we scale up? Deep Learning's Diminishing Returns (Article Review)
2021-09-29 [ML News] Plagiarism Case w/ Plot Twist | CLIP for video surveillance | OpenAI summarizes books
2021-09-27 Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained)
2021-09-26 100K Subs AMA (Ask Me Anything)
2021-09-24 [ML News] New ImageNet SOTA | Uber's H3 hexagonal coordinate system | New text-image-pair dataset
2021-09-21 Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset
2021-09-20 Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
2021-09-16 [ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog
2021-09-14 Celebrating 100k Subscribers! (w/ Channel Statistics)
2021-09-10 [ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
grokking
openai
double descent
belkin
overfitting
bias variance
steps
training
binary tables
binary operations
binary operation
multiplication table
algorithmic datasets
groups
s5 group
deep learning algorithmic
deep learning generalization
generalization research
why do neural networks generalize