Score: 3

Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)

Published: May 26, 2025 | arXiv ID: 2505.20029v1

By: Subba Reddy Oota, Akshett Jindal, Ishani Mondal, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

AI models that follow natural-language instructions process visual scenes in ways that mirror human brain activity.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformer-based language models, though not explicitly trained to mimic brain recordings, have demonstrated surprising alignment with brain activity. Progress in these models (through increased size, instruction-tuning, and multimodality) has led to better representational alignment with neural data. Recently, a new class of instruction-tuned multimodal LLMs (MLLMs) has emerged, showing remarkable zero-shot capabilities in open-ended multimodal vision tasks. However, it is unknown whether MLLMs, when prompted with natural instructions, achieve better brain alignment and effectively capture instruction-specific representations. To address this, we first investigate brain alignment, i.e., the degree to which text output response embeddings from MLLMs predict neural visual activity as participants watch natural scenes. Experiments with 10 different instructions show that MLLMs exhibit significantly better brain alignment than vision-only models and perform comparably to non-instruction-tuned multimodal models such as CLIP. We also find that although these MLLMs generate high-quality responses suited to task-specific instructions, not all instructions are relevant for brain alignment. Further, by varying instructions, we make the MLLMs encode instruction-specific visual concepts related to the input image. This analysis shows that MLLMs effectively capture count-related and recognition-related concepts, demonstrating strong alignment with brain activity. Notably, the majority of the explained variance of the brain encoding models is shared between MLLM embeddings of image captioning and those of other instructions. These results suggest that enhancing MLLMs' ability to capture task-specific information could lead to better differentiation between instruction types and thereby improve their precision in predicting brain responses.

Country of Origin
🇩🇪 🇺🇸 Germany, United States

Repos / Data Links

Page Count
30 pages

Category
Quantitative Biology: Neurons and Cognition