Score: 2

MindCine: Multimodal EEG-to-Video Reconstruction with Large-Scale Pretrained Models

Published: January 26, 2026 | arXiv ID: 2601.18192v1

By: Tian-Yi Zhou, Xuan-Hao Liu, Bao-Liang Lu, and more

Potential Business Impact:

Reconstructs video of what a person is seeing from their brain waves (EEG).

Business Areas:
Motion Capture, Media and Entertainment, Video

Reconstructing human dynamic visual perception from electroencephalography (EEG) signals is of great research significance owing to EEG's non-invasiveness and high temporal resolution. However, EEG-to-video reconstruction remains challenging for two reasons: 1) Single modality: existing studies align EEG signals only with the text modality, ignoring other modalities and leaving models prone to overfitting; 2) Data scarcity: current methods often struggle to converge when trained on limited EEG-video data. To address these problems, we propose MindCine, a novel framework that achieves high-fidelity video reconstruction from limited data. We employ a multimodal joint learning strategy to incorporate modalities beyond text during training, and we leverage a pre-trained large EEG model to relieve the data-scarcity issue when decoding semantic information, while a Seq2Seq model with causal attention is specifically designed for decoding perceptual information. Extensive experiments demonstrate that our model outperforms state-of-the-art methods both qualitatively and quantitatively. The results further underscore the complementary strengths of the different modalities and show that leveraging a large-scale pre-trained EEG model can enhance reconstruction performance by alleviating the challenges of limited data.
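To make the perceptual-decoding component concrete, below is a minimal, hypothetical PyTorch sketch of a Seq2Seq model with causal attention, in the spirit of what the abstract describes: a window of EEG features conditions a Transformer decoder that predicts per-frame video latents, with a causal mask so each frame attends only to earlier frames. The paper does not publish its architecture here, so all names, dimensions, and shapes (`CausalSeq2Seq`, `eeg_dim`, `latent_dim`, etc.) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: causal Seq2Seq mapping EEG features to frame latents.
# Dimensions and module names are illustrative assumptions, not from the paper.
import torch
import torch.nn as nn

class CausalSeq2Seq(nn.Module):
    def __init__(self, eeg_dim=128, latent_dim=64, d_model=256,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.eeg_proj = nn.Linear(eeg_dim, d_model)       # embed EEG features (memory)
        self.frame_proj = nn.Linear(latent_dim, d_model)  # embed previous frame latents
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.out = nn.Linear(d_model, latent_dim)         # predict next frame latent

    def forward(self, eeg_seq, prev_latents):
        # eeg_seq: (B, T_eeg, eeg_dim); prev_latents: (B, T_frames, latent_dim)
        memory = self.eeg_proj(eeg_seq)
        tgt = self.frame_proj(prev_latents)
        # Causal mask: frame t may attend only to frames <= t.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        h = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(h)

# Usage with made-up shapes: 100 EEG feature steps conditioning 8 frame latents.
model = CausalSeq2Seq()
eeg = torch.randn(1, 100, 128)
frames = torch.randn(1, 8, 64)
pred = model(eeg, frames)  # -> (1, 8, 64)
```

In such a setup, the predicted latents would typically be decoded to pixels by a pre-trained video generator, while the semantic pathway (the large EEG model aligned with multiple modalities) supplies the conditioning signal; how the paper combines the two streams is not specified in this listing.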

Country of Origin
πŸ‡¨πŸ‡³ China

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition