Open Problems in Mechanistic Interpretability: A Whirlwind Tour

Subscribers: 348,000
Published: 2023-06-21
Video Link: https://www.youtube.com/watch?v=ZSg4-H8L6Ec
Duration: 55:27
Views: 1,036
Likes: 31

A Google TechTalk, presented by Neel Nanda, 2023/06/20
Google Algorithms Seminar - ABSTRACT: Mechanistic Interpretability is the study of reverse engineering the learned algorithms in a trained neural network, in the hope of applying this understanding to make powerful systems safer and more steerable. In this talk Neel will give an overview of the field, summarise some key works, and outline what he sees as the most promising areas of future work and open problems. This will touch on techniques in causal abstraction and mediation analysis, understanding superposition and distributed representations, model editing, and studying individual circuits and neurons.
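To make the mediation-analysis idea concrete, here is a minimal activation-patching sketch on a toy PyTorch model. The model, inputs, and layer choice are illustrative assumptions, not taken from the talk; the pattern is simply: cache an activation from a "clean" run, splice it into a "corrupted" run, and see how much of the clean behaviour is recovered.

```python
# Minimal activation-patching (causal mediation) sketch on a toy MLP.
# Everything here is a hypothetical stand-in for a real model and task.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny 2-layer MLP standing in for a trained network.
model = nn.Sequential(
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, 8), nn.ReLU(),
    nn.Linear(8, 2),
)

clean_x = torch.randn(1, 4)    # "clean" prompt
corrupt_x = torch.randn(1, 4)  # "corrupted" prompt

layer = model[2]  # the intermediate layer whose causal role we probe
cache = {}

# 1) Run on the clean input and cache the layer's activation.
def save_hook(module, inputs, output):
    cache["clean"] = output.detach()

handle = layer.register_forward_hook(save_hook)
clean_out = model(clean_x)
handle.remove()

# 2) Run on the corrupted input, patching in the cached clean activation.
def patch_hook(module, inputs, output):
    return cache["clean"]

handle = layer.register_forward_hook(patch_hook)
patched_out = model(corrupt_x)
handle.remove()

corrupt_out = model(corrupt_x)

# 3) If patching this layer moves the corrupted output toward the clean one,
#    the layer mediates the behaviour being studied.
print("clean:  ", clean_out)
print("corrupt:", corrupt_out)
print("patched:", patched_out)
```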

About the Speaker: Neel works on the mechanistic interpretability team at Google DeepMind. He previously worked with Chris Olah at Anthropic on the transformer circuits agenda, and has done independent work on reverse-engineering modular addition and using this to understand grokking.




Other Videos By Google TechTalks


2023-07-03  2023 Blockly Developer Summit Day 1-8: Blocks in Docs
2023-07-03  2023 Blockly Developer Summit Day 2-16: Curriculum Development Panel Discussion
2023-07-03  2023 Blockly Developer Summit Day 1-6: Generative Block Programming in MIT App Inventor
2023-07-03  2023 Blockly Developer Summit Day 2-11: Onboarding New Users
2023-07-03  2023 Blockly Developer Summit Day 2-6: Code.org - Sprite Lab
2023-07-03  2023 Blockly Developers Summit Day 1-1: Welcome
2023-07-03  2023 Blockly Developer Summit Day 2-7: How to Convince Teachers to Teach Coding
2023-07-03  2023 Blockly Developer Summit Day 2-14: Text to Blocks to Text with Layout
2023-07-03  2023 Blockly Developer Summit Day 2-8: Active STEM with Unruly Splats
2023-06-29  A Constant Factor Prophet Inequality for Online Combinatorial Auctions
2023-06-21  Open Problems in Mechanistic Interpretability: A Whirlwind Tour
2023-06-11  Online Prediction in Sub-linear Space
2023-06-06  Accelerating Transformers via Kernel Density Estimation (Insu Han)
2023-06-06  Differentially Private Synthetic Data via Foundation Model APIs
2023-06-05  Foundation Models and Fair Use
2023-05-30  Differentially Private Online to Batch
2023-05-30  Differentially Private Diffusion Models Generate Useful Synthetic Images
2023-05-30  Improving the Privacy Utility Tradeoff in Differentially Private Machine Learning with Public Data
2023-05-30  Randomized Approach for Tight Privacy Accounting
2023-05-30  Almost Tight Error Bounds on Differentially Private Continual Counting
2023-05-30  EIFFeL: Ensuring Integrity for Federated Learning