Principles of learning in distributed neural networks

Video link: https://www.youtube.com/watch?v=-Sn5uyMRrKI



Duration: 35:15


Andrew Saxe (University College London)
https://simons.berkeley.edu/talks/andrew-saxe-university-college-london-2024-06-06
Understanding Lower-Level Intelligence from AI, Psychology, and Neuroscience Perspectives

The brain is an unparalleled learning machine, yet the principles that govern learning in the brain remain unclear. In this talk I will suggest that depth, the serial propagation of signals, may be a key principle sculpting learning dynamics in the brain and mind. To understand several consequences of depth, I will present mathematical analyses of the nonlinear dynamics of learning in a variety of simple solvable deep network models. Building from this theoretical work, I will turn to rodent systems neuroscience, showing that deep network dynamics can account for individually variable yet systematic transitions in strategy as mice learn a visual detection task over several weeks. Together, these results provide analytic insight into how the statistics of an environment can interact with nonlinear deep learning dynamics to structure evolving neural representations and behavior over learning.







Tags: Simons Institute, theoretical computer science, UC Berkeley, Computer Science, Theory of Computation, Theory of Computing, Understanding Lower-Level Intelligence from AI, Psychology, and Neuroscience Perspectives, Andrew Saxe