Instruction-Tuned Video-Audio Models Elucidate Functional Specialization in the Brain
By: Subba Reddy Oota, Khushbu Pahwa, Prachi Jindal, and more
Potential Business Impact:
Makes AI understand movies like people do.
Recent voxel-wise multimodal brain encoding studies have shown that multimodal large language models (MLLMs) exhibit a higher degree of brain alignment than unimodal models in both unimodal and multimodal stimulus settings. More recently, instruction-tuned multimodal models have been shown to generate task-specific representations that align strongly with brain activity. However, prior work evaluating the brain alignment of MLLMs has primarily focused on unimodal settings or relied on non-instruction-tuned multimodal models for multimodal stimuli. To address this gap, we investigated brain alignment, that is, the degree to which representations derived from MLLMs predict neural activity recorded while participants watched naturalistic movies (video along with audio). We utilized instruction-specific embeddings from six video and two audio instruction-tuned MLLMs. Experiments with 13 video task-specific instructions show that instruction-tuned video MLLMs significantly outperform non-instruction-tuned multimodal models (by 15%) and unimodal models (by 20%). Our evaluation of MLLMs on both video and audio tasks using language-guided instructions shows clear disentanglement of task-specific representations, leading to precise differentiation of multimodal functional processing in the brain. We also find that MLLM layers align hierarchically with the brain: early sensory areas show strong alignment with early layers, while higher-level visual and language regions align more with middle to late layers. These findings provide clear evidence for the role of task-specific instructions in improving the alignment between brain activity and MLLMs, and open new avenues for mapping joint information processing in both systems. We make the code publicly available at https://github.com/subbareddy248/mllm_videos.
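To make the voxel-wise encoding idea concrete, below is a minimal sketch of how MLLM embeddings can be mapped to fMRI responses and scored by predictivity. It assumes you already have layerwise MLLM features for each movie segment and the matching brain recordings; the variable names, shapes, regularization, and cross-validation scheme are illustrative assumptions, not the authors' exact pipeline (see the linked repository for that).

```python
# Sketch of a voxel-wise brain encoding analysis (assumed setup, not the paper's code):
# ridge regression from MLLM features to fMRI voxels, scored by per-voxel Pearson r.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def voxelwise_brain_alignment(features, fmri, n_folds=5):
    """features: (n_TRs, d_model) embeddings from one MLLM layer under one instruction.
    fmri: (n_TRs, n_voxels) responses recorded while watching the same movie segments.
    Returns the cross-validated Pearson correlation for each voxel."""
    kf = KFold(n_splits=n_folds, shuffle=False)  # keep temporal order within folds
    voxel_corrs = np.zeros(fmri.shape[1])
    for train_idx, test_idx in kf.split(features):
        # Regularized linear encoding model, one weight vector per voxel
        model = RidgeCV(alphas=np.logspace(-1, 4, 10))
        model.fit(features[train_idx], fmri[train_idx])
        pred = model.predict(features[test_idx])
        true = fmri[test_idx]
        # Per-voxel Pearson correlation between predicted and measured responses
        pred_z = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        true_z = (true - true.mean(0)) / (true.std(0) + 1e-8)
        voxel_corrs += (pred_z * true_z).mean(0)
    return voxel_corrs / n_folds

# Usage (hypothetical arrays): repeating this per instruction, per layer, and per
# brain region gives the kind of task- and layer-specific alignment maps the
# abstract describes.
# alignment = voxelwise_brain_alignment(features, fmri)
# print("mean voxel correlation:", alignment.mean())
```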
Similar Papers
Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)
Neurons and Cognition
Computers understand what you see, like brains do.
Multi-modal brain encoding models for multi-modal stimuli
Neurons and Cognition
Helps understand how brains mix sight and sound.
Mind the Gap: Aligning the Brain with Language Models Requires a Nonlinear and Multimodal Approach
Computation and Language
Reads minds by matching sounds to brain signals.