Machine Learning, Deep Networks and Interpretability in Systems, Cognitive and...

Published on 2022-06-30 ● Video Link: https://www.youtube.com/watch?v=XLJRZ_tPR_Q



Duration: 44:46
637 views · 16 likes


Jack Gallant (University of California, Berkeley)
https://simons.berkeley.edu/talks/tbd-435
Interpretable Machine Learning in Natural and Social Sciences

The mammalian brain is an extremely complicated, dynamical deep network. Systems, cognitive and computational neuroscientists seek to understand how information is represented throughout this network, and how these representations are modulated by attention and learning. Machine learning provides many tools useful for analyzing brain data recorded in neuroimaging, neurophysiology and optical imaging experiments. For example, deep neural networks trained to perform complex tasks can be used as a source of features for data analysis, or they can be trained directly to model complex data sets. Although artificial deep networks can produce complex models that accurately predict brain responses under complex conditions, the resulting models are notoriously difficult to interpret. This limits the utility of deep networks for neuroscience, where interpretation is often prized over absolute prediction accuracy. In this talk I will review two approaches that can be used to maximize interpretability of artificial deep networks and other machine learning tools when applied to brain data. The first approach is to use deep networks as a source of features for regression-based modeling. The second is to use deep learning infrastructure to construct sophisticated computational models of brain data. Both these approaches provide a means to produce high-dimensional quantitative models of brain data recorded under complex naturalistic conditions, while maximizing interpretability.
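Below is a minimal sketch of the first approach described above: a pretrained deep network serves as a frozen feature source, and a cross-validated ridge regression maps its activations to recorded brain responses. The specific network (torchvision's AlexNet), the layer choice, the data shapes, and the use of scikit-learn are all placeholder assumptions for illustration, not details from the talk.

```python
# Sketch of approach 1: pretrained deep network as a fixed feature source,
# plus ridge regression from features to brain responses. The network, layer,
# shapes, and data below are illustrative assumptions only.
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import RidgeCV

# Frozen pretrained network, used only to compute features.
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def deep_features(images: torch.Tensor) -> np.ndarray:
    """Activations from the convolutional stack, one flat vector per stimulus."""
    with torch.no_grad():
        feats = net.features(images)
    return feats.flatten(start_dim=1).numpy()

# Stand-in data: n stimuli presented to a subject, responses from v voxels
# (random here; in practice, naturalistic stimuli and fMRI responses).
n, v = 200, 50
stimuli = torch.randn(n, 3, 224, 224)
responses = np.random.randn(n, v)

X = deep_features(stimuli)

# Cross-validated ridge regression, fit jointly for all voxels. The learned
# weight matrix is the interpretable object: each row says which network
# features a given voxel's response is tuned to.
encoding = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X, responses)
weights = encoding.coef_   # shape (v, n_features)
```

In the second approach, by contrast, the deep learning machinery itself is trained directly on the brain data to form the computational model, rather than serving as a fixed source of regression features.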




Other Videos By Simons Institute for the Theory of Computing


2022-07-11 Genomics of Cancer
2022-07-11 Formatting Biological Big Data to Enable (Personalized) Systems Pharmacology
2022-07-11 Landscapes of Human cis-regulatory Elements and Transcription Factor Binding Sites...
2022-07-11 Spatial Transcriptomics Identifies Neighbourhoods and Molecular Markers of Alveolar Damage...
2022-07-11 BANKSY: A Spatial Omics Algorithm that Unifies Cell Type Clustering and Tissue Domain Segmentation
2022-07-01 Panel on Interpretability in the Law
2022-07-01 Platform-supported Auditing of Social Media Algorithms for Public Interest
2022-07-01 Legal Barriers to Interpretable Machine Learning
2022-06-30 Interpretability and Algorithmic Fairness
2022-06-30 Panel on Interpretability in the Biological Sciences
2022-06-30 Machine Learning, Deep Networks and Interpretability in Systems, Cognitive and...
2022-06-30 Interpreting Deep Learning Models of Functional Genomics Data to Decode Regulatory Sequence...
2022-06-30 Panel on Interpretability in the Physical Sciences
2022-06-30 Interpreting Machine Learning From the Perspective of Nonequilibrium Systems
2022-06-29 Interpretability in Atomic-Scale Machine Learning
2022-06-29 Panel on Causality
2022-06-29 Explanation: A(n Abridged) Survey
2022-06-29 Conceptual Challenges in Connecting Interpretability and Causality
2022-06-29 Panel on Societal Dimensions of Explanation
2022-06-29 Accuracy and Interpretability Through the Lens of Human-AI Teaming
2022-06-28 Goals and Interpretable Variables in Neuroscience



Tags:
Simons Institute
theoretical computer science
UC Berkeley
Computer Science
Theory of Computation
Theory of Computing
Interpretable Machine Learning in Natural and Social Sciences
Jack Gallant