NeuroSwift: A Lightweight Cross-Subject Framework for fMRI Visual Reconstruction of Complex Scenes
By: Shiyi Zhang, Dong Liang, Yihang Zhou
Potential Business Impact:
Shows what a person is seeing from their brain activity.
Reconstructing visual information from brain activity with computer vision techniques provides an intuitive window into visual neural mechanisms. Despite progress in decoding fMRI data with generative models, accurate cross-subject reconstruction of visual stimuli remains challenging and computationally demanding. The difficulty stems from inter-subject variability in neural representations and from the brain's abstract encoding of core semantic features in complex visual inputs. To address these challenges, we propose NeuroSwift, which integrates complementary adapters via diffusion: an AutoKL adapter for low-level features and a CLIP adapter for semantics. NeuroSwift's CLIP Adapter is trained on Stable Diffusion-generated images paired with COCO captions to emulate encoding in the higher visual cortex. For cross-subject generalization, we pretrain on one subject and then fine-tune only 17 percent of the parameters (the fully connected layers) for each new subject, keeping all other components frozen. This yields state-of-the-art performance with only one hour of training per subject on lightweight GPUs (three RTX 4090s), outperforming existing methods.
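The parameter-efficient cross-subject adaptation described in the abstract can be illustrated with a short PyTorch sketch: shared components stay frozen and only subject-specific fully connected layers receive gradients. All module and layer names below (ToyNeuroEncoder, subject_fc, clip_adapter, autokl_adapter) are hypothetical stand-ins rather than the authors' released code, and the printed trainable fraction is illustrative, not the reported 17 percent.

```python
# Minimal sketch, assuming a PyTorch-style encoder with a subject-specific
# fully connected voxel mapping and shared semantic / low-level branches.
import torch
import torch.nn as nn


class ToyNeuroEncoder(nn.Module):
    """Stand-in for a NeuroSwift-style encoder with two output branches."""

    def __init__(self, n_voxels: int = 8000, hidden: int = 256):
        super().__init__()
        # subject-specific fully connected mapping from voxels to a shared space
        self.subject_fc = nn.Linear(n_voxels, hidden)
        # shared branches: semantic (CLIP-like) and low-level (AutoKL-like) targets
        self.clip_adapter = nn.Sequential(nn.Linear(hidden, 4096), nn.GELU(), nn.Linear(4096, 768))
        self.autokl_adapter = nn.Sequential(nn.Linear(hidden, 4096), nn.GELU(), nn.Linear(4096, 4096))

    def forward(self, voxels: torch.Tensor):
        h = self.subject_fc(voxels)
        return self.clip_adapter(h), self.autokl_adapter(h)


def prepare_for_new_subject(model: nn.Module, trainable_prefixes=("subject_fc",)):
    """Freeze everything except the named subject-specific fully connected layers."""
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(trainable_prefixes)
    return [p for p in model.parameters() if p.requires_grad]


model = ToyNeuroEncoder()
trainable = prepare_for_new_subject(model)
n_train = sum(p.numel() for p in trainable)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {n_train / n_total:.1%}")  # small fraction, in the spirit of the ~17% figure
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # optimizer only sees the unfrozen layers
```

After a full pretraining run on one subject, adapting to a new subject would then amount to a short fine-tuning loop over that optimizer, which is consistent with the roughly one hour per subject reported in the abstract.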
Similar Papers
A Cognitive Process-Inspired Architecture for Subject-Agnostic Brain Visual Decoding
CV and Pattern Recognition
Lets computers see what people see.
HAVIR: HierArchical Vision to Image Reconstruction using CLIP-Guided Versatile Diffusion
CV and Pattern Recognition
Turns brain activity back into pictures.
BrainMCLIP: Brain Image Decoding with Multi-Layer feature Fusion of CLIP
CV and Pattern Recognition
Reads minds to see detailed pictures.