Seeing Through the Brain: New Insights from Decoding Visual Stimuli with fMRI
By: Zheng Huang, Enpei Zhang, Yinghao Cai, and more
Potential Business Impact:
Lets computers see what you see from brain scans.
Understanding how the brain encodes visual information is a central challenge in neuroscience and machine learning. A promising approach is to reconstruct visual stimuli, essentially images, from functional Magnetic Resonance Imaging (fMRI) signals. This involves two stages: transforming fMRI signals into a latent space and then using a pretrained generative model to reconstruct images. The reconstruction quality depends on how similar the latent space is to the structure of neural activity and how well the generative model produces images from that space. Yet, it remains unclear which type of latent space best supports this transformation and how it should be organized to represent visual stimuli effectively. We present two key findings. First, fMRI signals are more similar to the text space of a language model than to either a vision-based space or a joint text-image space. Second, text representations and the generative model should be adapted to capture the compositional nature of visual stimuli, including objects, their detailed attributes, and relationships. Building on these insights, we propose PRISM, a model that Projects fMRI sIgnals into a Structured text space as an interMediate representation for visual stimuli reconstruction. It includes an object-centric diffusion module that generates images by composing individual objects to reduce object detection errors, and an attribute-relationship search module that automatically identifies key attributes and relationships that best align with the neural activity. Extensive experiments on real-world datasets demonstrate that our framework outperforms existing methods, achieving up to an 8% reduction in perceptual loss. These results highlight the importance of using structured text as the intermediate space to bridge fMRI signals and image reconstruction.
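The two-stage pipeline described above can be sketched in miniature: learn a linear projection from fMRI voxel activity into a text-like embedding space, then use that projection to identify the matching stimulus. This is a minimal toy illustration with synthetic data, not the paper's actual PRISM implementation; all array shapes, the ridge-regression mapping, and the nearest-neighbor "decoder" standing in for a generative model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_dim, n_train = 200, 16, 50

# Hypothetical text embeddings for training stimuli (the structured
# intermediate space that stage 1 projects into).
text_emb = rng.normal(size=(n_train, n_dim))

# Simulated fMRI responses: a noisy linear encoding of those embeddings.
encoding = rng.normal(size=(n_dim, n_voxels))
fmri = text_emb @ encoding + 0.1 * rng.normal(size=(n_train, n_voxels))

# Stage 1: fit a ridge-regression projection from fMRI to text space.
lam = 1.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels), fmri.T @ text_emb)

# Stage 2 stand-in: project one scan into text space and retrieve the
# most similar stimulus embedding by cosine similarity (a real system
# would instead condition a pretrained generative model on it).
pred = fmri[0] @ W
sims = (text_emb @ pred) / (np.linalg.norm(text_emb, axis=1) * np.linalg.norm(pred))
best = int(np.argmax(sims))
print(best)
```

In this sketch, `best` recovers the index of the stimulus that produced the scan; the point is only that decoding quality hinges on how well the learned projection aligns neural activity with the intermediate space.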
Similar Papers
BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain
CV and Pattern Recognition
Finds what brain parts see specific things.
Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer
CV and Pattern Recognition
Shows pictures people see from brain scans.
Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction
CV and Pattern Recognition
Lets computers watch videos from brain scans.