Decoding the Multimodal Mind: Generalizable Brain-to-Text Translation via Multimodal Alignment and Adaptive Routing
By: Chunyu Ye, Yunhao Zhang, Jingyuan Sun, and more
Potential Business Impact:
Reads thoughts about pictures, sounds, and words.
Decoding language from the human brain remains a grand challenge for Brain-Computer Interfaces (BCIs). Current approaches typically rely on unimodal brain representations, neglecting the brain's inherently multimodal processing. Inspired by the brain's associative mechanisms, where viewing an image can evoke related sounds and linguistic representations, we propose a unified framework that leverages Multimodal Large Language Models (MLLMs) to align brain signals with a shared semantic space encompassing text, images, and audio. A router module dynamically selects and fuses modality-specific brain features according to the characteristics of each stimulus. Experiments on various fMRI datasets with textual, visual, and auditory stimuli demonstrate state-of-the-art performance, achieving an 8.48% improvement on the most commonly used benchmark. We further extend our framework to EEG and MEG data, demonstrating flexibility and robustness across varying temporal and spatial resolutions. To our knowledge, this is the first unified BCI architecture capable of robustly decoding multimodal brain activity across diverse brain signals and stimulus types, offering a flexible solution for real-world applications.
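The abstract centers on a router module that dynamically weights modality-specific brain features before fusing them into a shared semantic space read by an MLLM decoder. Below is a minimal PyTorch sketch of that routing-and-fusion idea; the module names, dimensions, and soft-gating scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): softly weight text-, image-, and
# audio-aligned brain features, then fuse them into one shared semantic vector.
# All names, sizes, and the gating scheme are assumptions for illustration.
import torch
import torch.nn as nn


class ModalityRouter(nn.Module):
    """Fuses modality-specific brain features via learned per-sample gates."""

    def __init__(self, brain_dim: int = 4096, semantic_dim: int = 1024, n_modalities: int = 3):
        super().__init__()
        # One projection per modality-aligned view of the brain signal.
        self.encoders = nn.ModuleList(
            [nn.Linear(brain_dim, semantic_dim) for _ in range(n_modalities)]
        )
        # Router predicts per-modality mixing weights from the brain input.
        self.gate = nn.Sequential(
            nn.Linear(brain_dim, n_modalities),
            nn.Softmax(dim=-1),
        )

    def forward(self, brain: torch.Tensor) -> torch.Tensor:
        # brain: (batch, brain_dim) flattened fMRI/EEG/MEG features.
        feats = torch.stack([enc(brain) for enc in self.encoders], dim=1)  # (B, M, D)
        weights = self.gate(brain).unsqueeze(-1)                           # (B, M, 1)
        # Weighted fusion into the shared semantic space; this vector would
        # then condition a multimodal LLM to generate the decoded text.
        return (weights * feats).sum(dim=1)                                # (B, D)


if __name__ == "__main__":
    router = ModalityRouter()
    fused = router(torch.randn(2, 4096))
    print(fused.shape)  # torch.Size([2, 1024])
```

In this sketch the gate plays the role the abstract assigns to the router: samples driven by, say, a visual stimulus can up-weight the image-aligned encoder while still drawing on the text- and audio-aligned ones.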
Similar Papers
A Pre-trained Framework for Multilingual Brain Decoding Using Non-invasive Recordings
Neurons and Cognition
Lets brains talk in any language.
Unified Multimodal Brain Decoding via Cross-Subject Soft-ROI Fusion
Machine Learning (CS)
Reads minds to describe what you see.
Brain-Adapter: Enhancing Neurological Disorder Analysis with Adapter-Tuning Multimodal Large Language Models
Image and Video Processing
Helps doctors find brain problems using scans and words.