Noether Networks: Meta-Learning Useful Conserved Quantities (w/ the authors)

Video Link: https://www.youtube.com/watch?v=Xp3jR-ttMfo



Duration: 1:09:05


#deeplearning #noether #symmetries

This video includes an interview with first author Ferran Alet!
Encoding inductive biases is a long-established method for letting deep networks learn from less data. Especially useful are encodings of symmetry properties of the data, such as the convolution's translation equivariance. But such symmetries are often hard to specify by hand, and building them directly into the architecture only works when they hold exactly. Noether Networks use Noether's theorem, which connects symmetries to conserved quantities: instead of hard-coding a symmetry, they meta-learn a conserved quantity and enforce its conservation at prediction time, which lets them impose symmetry properties on deep networks dynamically and approximately.
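
For illustration, here is a minimal PyTorch sketch of the core idea (not the authors' code; `predictor` and `g` are hypothetical stand-ins for the sequence model and the meta-learned conserved-quantity network). The conservation loss measures how much g's output drifts along the model's own predicted rollout; a true conserved quantity would not drift at all.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: `predictor` rolls the state one step forward,
# `g` embeds a state into quantities that should stay constant over time.
predictor = nn.Linear(4, 4)
g = nn.Linear(4, 8)

def noether_loss(x0: torch.Tensor, horizon: int) -> torch.Tensor:
    """Penalize variation of g along the predicted rollout.

    If g encodes a true conserved quantity (e.g. the energy of a
    pendulum), then g(x_t) == g(x_0) for every predicted step t.
    """
    x, q0 = x0, g(x0)
    loss = torch.zeros(())
    for _ in range(horizon):
        x = predictor(x)                          # roll the model forward
        loss = loss + ((g(x) - q0) ** 2).mean()   # conservation violation
    return loss / horizon

# The loss is differentiable w.r.t. both networks, so it can be minimized
# at test time for a single input and meta-learned across sequences.
x0 = torch.randn(1, 4)
print(noether_loss(x0, horizon=5))
```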

OUTLINE:
0:00 - Intro & Overview
18:10 - Interview Start
21:20 - Symmetry priors vs conserved quantities
23:25 - Example: Pendulum
27:45 - Noether Network Model Overview
35:35 - Optimizing the Noether Loss
41:00 - Is the computation graph stable?
46:30 - Increasing the inference time computation
48:45 - Why dynamically modify the model?
55:30 - Experimental Results & Discussion

Paper: https://arxiv.org/abs/2112.03321
Website: https://dylandoblar.github.io/noether-networks/
Code: https://github.com/dylandoblar/noether-networks

Abstract:
Progress in machine learning (ML) stems from a combination of data availability, computational resources, and an appropriate encoding of inductive biases. Useful biases often exploit symmetries in the prediction problem, such as convolutional networks relying on translation equivariance. Automatically discovering these useful symmetries holds the potential to greatly improve the performance of ML systems, but still remains a challenge. In this work, we focus on sequential prediction problems and take inspiration from Noether's theorem to reduce the problem of finding inductive biases to meta-learning useful conserved quantities. We propose Noether Networks: a new type of architecture where a meta-learned conservation loss is optimized inside the prediction function. We show, theoretically and experimentally, that Noether Networks improve prediction quality, providing a general framework for discovering inductive biases in sequential problems.

Authors: Ferran Alet, Dylan Doblar, Allan Zhou, Joshua Tenenbaum, Kenji Kawaguchi, Chelsea Finn
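
The "optimized inside the prediction function" step can be sketched as a test-time inner loop, reusing the hypothetical `predictor` and `g` from the sketch above (an assumption-laden sketch, not the authors' implementation): a per-sequence copy of the predictor takes a few gradient steps on the conservation loss before producing the final prediction. Meta-learning g then means backpropagating through this inner loop, which is omitted here.

```python
import copy
import torch

def tailor_and_predict(x0, horizon, inner_steps=1, inner_lr=1e-3):
    """Test-time 'tailoring' sketch: adapt a copy of the predictor on a
    single input by minimizing the conservation loss, then roll it out."""
    adapted = copy.deepcopy(predictor)              # per-sequence copy
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        x, q0 = x0, g(x0)
        loss = torch.zeros(())
        for _ in range(horizon):
            x = adapted(x)
            loss = loss + ((g(x) - q0) ** 2).mean()
        loss.backward()                             # only the adapted copy
        opt.step()                                  # is updated here
    with torch.no_grad():                           # final rollout
        xs, x = [], x0
        for _ in range(horizon):
            x = adapted(x)
            xs.append(x)
    return torch.stack(xs)
```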

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/2017636191

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
noether networks
noether's theorem
noether theorem
symmetries
neural network bias
neural network symmetries
inductive biases
conserved quantities
pendulum
neural network physics
deep learning physics
deep learning symmetries
group convolutions
with the authors
paper explained
deep learning prediction
test time optimization
tailoring
neural network tailoring