Frame-shifting and Conceptual Blending: What do large language models have to say?
Video Link: https://www.youtube.com/watch?v=GzYpFbXYW5E
Seana Coulson (UC San Diego)
https://simons.berkeley.edu/talks/seana-coulson-uc-san-diego-2024-12-02
Unknown Futures of Generalization
Work in cognitive semantics suggests that human language comprehension is remarkably flexible, relying heavily on hierarchically structured background knowledge and conceptual mapping ability. In this talk I describe evidence from my lab suggesting that metrics derived from large language models do a good job of predicting behavioral and neural responses to some aspects of human language. I then describe research on joke comprehension that highlights important differences between meaning processing in humans and the ‘understanding’ displayed by language models trained only on text corpora.