Goals And Interpretable Variables In Neuroscience

Video Link: https://www.youtube.com/watch?v=-mewLr47k34

Duration: 45:30

David Danks (University of California, San Diego)
https://simons.berkeley.edu/talks/goals-and-interpretable-variables-neuroscience
Interpretable Machine Learning in Natural and Social Sciences

Modern cognitive neuroscience often requires us to identify causal "objects" (perhaps spatial aggregates, perhaps more complex dynamic objects) that can function in our neuroscientific theories. Moreover, we often hope or require that these "objects" be neuroscientifically understandable (or plausible). Of course, the brain does not come neatly segmented or packaged into appropriate aggregates or objects; rather, these objects are themselves the product of scientific work, and which objects we get depends on the goals that we have. I will argue that different goals map onto different learning criteria, which then map onto different extant methods in cognitive neuroscience. The philosophical and technical challenge is that these different methods can yield incompatible outputs, particularly if we require interpretability, and so we seem to be led toward a problematic pluralism. I will conclude by considering several ways to try to avoid problematic inconsistencies and conflicts between our theories.

Tags:
Simons Institute
theoretical computer science
UC Berkeley
Computer Science
Theory of Computation
Theory of Computing
Interpretable Machine Learning in Natural and Social Sciences
David Danks