Language models align with brain regions that represent concepts across modalities
By: Maria Ryskina, Greta Tuckute, Alexander Fung, and more
Potential Business Impact:
Computers understand ideas, not just words.
Cognitive science and neuroscience have long faced the challenge of disentangling representations of language from representations of conceptual meaning. As the same problem arises in today's language models (LMs), we investigate the relationship between LM-brain alignment and two neural metrics: (1) the level of brain activation during processing of sentences, targeting linguistic processing, and (2) a novel measure of meaning consistency across input modalities, which quantifies how consistently a brain region responds to the same concept across paradigms (sentence, word cloud, image) using an fMRI dataset (Pereira et al., 2018). Our experiments show that both language-only and language-vision models predict the signal better in more meaning-consistent areas of the brain, even when these areas are not strongly sensitive to language processing, suggesting that LMs might internally represent cross-modal conceptual meaning.
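To make the two quantities concrete, here is a minimal Python sketch (not the authors' code) of how one might compute them: cross-paradigm meaning consistency as the average pairwise correlation of a voxel's concept-response profile across the sentence, word-cloud, and image paradigms, and LM-brain alignment as the cross-validated accuracy of a ridge-regression encoding model mapping LM embeddings to voxel responses. The function names, array shapes, and the choice of ridge regression are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of (1) cross-paradigm meaning consistency and (2) LM-brain
# alignment via a ridge-regression encoding model. All names/shapes are assumed.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def meaning_consistency(responses):
    """responses: dict paradigm -> (n_concepts, n_voxels) array, rows aligned by concept.
    Returns an (n_voxels,) array: mean pairwise Pearson correlation of each voxel's
    concept-response profile across paradigms (e.g. sentence, word cloud, image)."""
    paradigms = list(responses.values())
    n_voxels = paradigms[0].shape[1]
    pairs = [(i, j) for i in range(len(paradigms)) for j in range(i + 1, len(paradigms))]
    consistency = np.zeros(n_voxels)
    for v in range(n_voxels):
        rs = [pearsonr(paradigms[i][:, v], paradigms[j][:, v])[0] for i, j in pairs]
        consistency[v] = np.mean(rs)
    return consistency

def lm_brain_alignment(lm_embeddings, voxel_responses, n_splits=5):
    """lm_embeddings: (n_stimuli, n_features); voxel_responses: (n_stimuli, n_voxels).
    Returns an (n_voxels,) array of cross-validated encoding-model correlations."""
    preds = np.zeros_like(voxel_responses, dtype=float)
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(lm_embeddings):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        model.fit(lm_embeddings[train], voxel_responses[train])
        preds[test] = model.predict(lm_embeddings[test])
    return np.array([pearsonr(preds[:, v], voxel_responses[:, v])[0]
                     for v in range(voxel_responses.shape[1])])
```

Under this reading, the paper's main result corresponds to the alignment scores being higher for voxels or regions with higher consistency scores, even outside strongly language-sensitive areas.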
Similar Papers
fMRI-LM: Towards a Universal Foundation Model for Language-Aligned fMRI Understanding
Computation and Language
Reads thoughts from brain scans using language.
Scaling and context steer LLMs along the same computational path as the human brain
Machine Learning (CS)
Brain and AI process information in a similar order.
Experiential Semantic Information and Brain Alignment: Are Multimodal Models Better than Language Models?
Computation and Language
Computers understand words better without pictures.