DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning

Subscribers: 284,000
Published on: 2021-04-11
Video Link: https://www.youtube.com/watch?v=qtu0aSTDE2I
Duration: 48:18
Views: 26,398
Likes: 1,061


#dreamcoder #programsynthesis #symbolicreasoning

Classic Machine Learning struggles with few-shot generalization on tasks where humans can easily generalize from just a handful of examples, such as sorting a list of numbers. Humans do this by coming up with a short program, or algorithm, that explains the few data points in a compact way. DreamCoder emulates this by using neural-guided search over a language of primitives, a library, that it builds up over time. This lets it iteratively construct more and more complex programs by building on its own abstractions, and therefore solve harder and harder tasks in a few-shot manner by generating very short programs that explain the few given data points. The resulting system not only generalizes quickly but also delivers explainable solutions in the form of a modular, hierarchical learned library. Combining this with classic Deep Learning for low-level perception is a very promising future direction.
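
A minimal, self-contained sketch of that loop may help make it concrete: a wake phase searches for short programs that fit each task's examples, and an abstraction phase compresses recurring fragments into new library primitives. Everything below (the toy integer DSL, the Library class, and all function names) is an illustrative stand-in, not DreamCoder's actual API, and the dreaming phase is omitted here.

```python
# Hypothetical, minimal sketch of the wake/abstraction loop described above.
# The toy DSL (unary functions over integers) and all names are illustrative
# stand-ins, not DreamCoder's actual API.

import itertools
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Library:
    primitives: Dict[str, Callable[[int], int]] = field(default_factory=dict)

def run(lib: Library, program: Tuple[str, ...], x: int) -> int:
    # A "program" here is just a sequence of primitives applied left to right.
    for name in program:
        x = lib.primitives[name](x)
    return x

def wake(lib: Library, examples, max_len: int = 3) -> Optional[Tuple[str, ...]]:
    """Wake phase (simplified): enumerate programs shortest-first and return
    the first one that reproduces all input/output examples."""
    names = list(lib.primitives)
    for length in range(1, max_len + 1):
        for program in itertools.product(names, repeat=length):
            if all(run(lib, program, x) == y for x, y in examples):
                return program
    return None

def abstraction(lib: Library, solutions: List[Tuple[str, ...]]) -> None:
    """Abstraction phase (simplified): fuse any pair of primitives that
    recurs across solutions into a new, reusable primitive."""
    counts: Dict[Tuple[str, str], int] = {}
    for prog in solutions:
        for a, b in zip(prog, prog[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    for (a, b), n in counts.items():
        if n >= 2:
            f, g = lib.primitives[a], lib.primitives[b]
            lib.primitives[f"{a}_then_{b}"] = lambda x, f=f, g=g: g(f(x))

# Each task is a few input/output pairs; the system must find one program per task.
tasks = [
    [(1, 4), (2, 6)],  # consistent with x -> 2 * (x + 1)
    [(1, 5), (2, 7)],  # consistent with x -> 2 * (x + 1) + 1
]

lib = Library({"inc": lambda x: x + 1, "double": lambda x: x * 2})
solutions = []
for examples in tasks:
    program = wake(lib, examples)
    if program is not None:
        solutions.append(program)
abstraction(lib, solutions)
print("solutions:", solutions)
print("library now contains:", sorted(lib.primitives))
```

In the real system the search is guided by a learned neural recognition model and the abstraction step compresses whole program refactorings rather than adjacent pairs, but the overall loop has this shape: solve tasks with short programs, then grow the library from what the solutions share.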

OUTLINE:
0:00 - Intro & Overview
4:55 - DreamCoder System Architecture
9:00 - Wake Phase: Neural Guided Search
19:15 - Abstraction Phase: Extending the Internal Library
24:30 - Dreaming Phase: Training Neural Search on Fictional Programs and Replays
30:55 - Abstraction by Compressing Program Refactorings
32:40 - Experimental Results on LOGO Drawings
39:00 - Ablation Studies
39:50 - Re-Discovering Physical Laws
42:25 - Discovering Recursive Programming Algorithms
44:20 - Conclusions & Discussion

Paper: https://arxiv.org/abs/2006.08381
Code: https://github.com/ellisk42/ec

Abstract:
Expert problem-solving is driven by powerful languages for thinking about problems and their solutions. Acquiring expertise means learning these languages -- systems of concepts, alongside the skills to use them. We present DreamCoder, a system that learns to solve problems by writing programs. It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages. A "wake-sleep" learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems. DreamCoder solves both classic inductive programming tasks and creative tasks such as drawing pictures and building scenes. It rediscovers the basics of modern functional programming, vector algebra and classical physics, including Newton's and Coulomb's laws. Concepts are built compositionally from those learned earlier, yielding multi-layered symbolic representations that are interpretable and transferrable to new tasks, while still growing scalably and flexibly with experience.
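
The "dreaming" half of that wake-sleep algorithm can be illustrated with a similarly hedged sketch: sample programs from the current library, execute them to produce fantasy tasks, and use the resulting (task, program) pairs as training data for the search guide. Here a deliberately trivial count-based proposer stands in for the paper's neural recognition network; every name and the crude task feature are made up for illustration.

```python
# Hypothetical sketch of the dreaming phase: fantasy tasks are generated by
# sampling and running programs from the current library, then a trivial
# count-based guide is fit on them. The real system trains a neural
# recognition network on these dreams and on replayed solutions.

import random
from collections import Counter, defaultdict

PRIMITIVES = {"inc": lambda x: x + 1, "double": lambda x: x * 2, "neg": lambda x: -x}

def sample_program(max_len: int = 3):
    """Sample a random program (sequence of primitive names) from the library."""
    return [random.choice(list(PRIMITIVES)) for _ in range(random.randint(1, max_len))]

def execute(program, x: int) -> int:
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def dream(n_dreams: int = 1000):
    """Generate (task feature, first primitive) training pairs from fantasy tasks."""
    data = []
    for _ in range(n_dreams):
        program = sample_program()
        x = random.randint(-5, 5)
        y = execute(program, x)
        feature = "output_grew" if y > x else "output_did_not_grow"  # crude task feature
        data.append((feature, program[0]))
    return data

def fit_guide(data):
    """Count-based stand-in for the recognition model: P(first primitive | feature)."""
    counts = defaultdict(Counter)
    for feature, primitive in data:
        counts[feature][primitive] += 1
    return counts

guide = fit_guide(dream())
# For a new task whose output grew, propose primitives in order of learned preference:
proposals = [p for p, _ in guide["output_grew"].most_common()]
print("try primitives in this order:", proposals)
```

The proposals would then bias the wake-phase enumeration, so that search effort concentrates on the primitives most likely to appear in a solution for tasks that look like the current one.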

Authors: Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sablé-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum


Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-05-15 DDPM - Diffusion Models Beat GANs on Image Synthesis (Machine Learning Research Paper Explained)
2021-05-11 Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky
2021-05-08 Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained)
2021-05-06 MLP-Mixer: An all-MLP Architecture for Vision (Machine Learning Research Paper Explained)
2021-05-04 I'm out of Academia
2021-05-01 DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
2021-04-30 Why AI is Harder Than We Think (Machine Learning Research Paper Explained)
2021-04-27 I COOKED A RECIPE MADE BY A.I. | Cooking with GPT-3 (Don't try this at home)
2021-04-19 NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)
2021-04-14 I BUILT A NEURAL NETWORK IN MINECRAFT | Analog Redstone Network w/ Backprop & Optimizer (NO MODS)
2021-04-11 DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
2021-04-07 PAIR AI Explorables | Is the problem in the data? Examples on Fairness, Diversity, and Bias.
2021-03-30 Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more!
2021-03-23 Is Google Translate Sexist? Gender Stereotypes in Statistical Machine Translation
2021-03-22 Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained)
2021-03-16 Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained)
2021-03-11 Yann LeCun - Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained)
2021-03-06 Apple or iPod??? Easy Fix for Adversarial Textual Attacks on OpenAI's CLIP Model! #Shorts
2021-03-05 Multimodal Neurons in Artificial Neural Networks (w/ OpenAI Microscope, Research Paper Explained)
2021-02-27 GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
2021-02-26 Linear Transformers Are Secretly Fast Weight Memory Systems (Machine Learning Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
artificial intelligence
wake sleep algorithm
program synthesis
ai program synthesis
program synthesis deep learning
dreamcoder
dream coder
mit dream coder
bayesian program search
neural guided search
learning to sort a list
neural networks learn sorting
deep learning physical laws
deep learning symbolic reasoning
symbolic machine learning
symbolic artificial intelligence
deep learning tutorial