Deep Neural Encoder-Decoder Model to Relate fMRI Brain Activity with Naturalistic Stimuli
By: Florian David, Michael Chan, Elenor Morgenroth, and more
Potential Business Impact:
Reconstructs movies from brain scans.
We propose an end-to-end deep neural encoder-decoder model to encode and decode brain activity in response to naturalistic stimuli using functional magnetic resonance imaging (fMRI) data. Leveraging temporally correlated input from consecutive film frames, we employ temporal convolutional layers in our architecture, which effectively allows us to bridge the temporal resolution gap between natural movie stimuli and fMRI acquisitions. Our model predicts the activity of voxels in and around the visual cortex and reconstructs the corresponding visual inputs from neural activity. Finally, we investigate the brain regions contributing to visual decoding through saliency maps. We find that the regions contributing most are the middle occipital area, the fusiform area, and the calcarine, which are respectively involved in shape perception, complex recognition (in particular face perception), and basic visual features such as edges and contrasts. The strong involvement of these functions is consistent with the decoder's capability to reconstruct edges, faces, and contrasts. Altogether, this suggests the possibility of probing our understanding of visual processing in films by using the behaviour of deep learning models, such as the one proposed in this paper, as a proxy.
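To make the described pipeline concrete, below is a minimal PyTorch sketch of an encoder that pools a window of consecutive film frames with temporal convolutions into predicted voxel activity, a decoder that maps voxel activity back to a coarse frame, and a gradient-based saliency computation over voxels. Layer counts, channel sizes, voxel counts, frame resolution, and the saliency method are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch only: dimensions and layer choices are assumptions, not the paper's model.
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Maps a window of consecutive film frames to predicted voxel activity."""

    def __init__(self, n_voxels: int = 4000):
        super().__init__()
        # Spatial feature extractor applied to every frame independently.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal convolutions pool the frame sequence down to a single
        # time point, bridging the frame rate and the fMRI sampling rate (TR).
        self.temporal = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.to_voxels = nn.Linear(64, n_voxels)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, 3, H, W)
        b, t, c, h, w = frames.shape
        x = self.spatial(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        x = self.temporal(x.transpose(1, 2)).squeeze(-1)   # (batch, 64)
        return self.to_voxels(x)                            # (batch, n_voxels)


class VoxelDecoder(nn.Module):
    """Reconstructs a coarse visual frame from voxel activity."""

    def __init__(self, n_voxels: int = 4000, out_size: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 512), nn.ReLU(),
            nn.Linear(512, 3 * out_size * out_size), nn.Sigmoid(),
        )
        self.out_size = out_size

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        return self.net(voxels).reshape(-1, 3, self.out_size, self.out_size)


if __name__ == "__main__":
    frames = torch.rand(2, 32, 3, 64, 64)                   # two stimulus windows
    voxels = FrameEncoder()(frames).detach().requires_grad_(True)
    recon = VoxelDecoder()(voxels)                           # reconstructed frames
    # Gradient-based saliency over voxels: an illustrative stand-in for
    # locating the regions that contribute most to visual decoding.
    recon.sum().backward()
    saliency = voxels.grad.abs()                             # (batch, n_voxels)
    print(voxels.shape, recon.shape, saliency.shape)
```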
Similar Papers
A Survey on fMRI-based Brain Decoding for Reconstructing Multimodal Stimuli
CV and Pattern Recognition
Lets computers see what you see from brain scans.
Hi-DREAM: Brain Inspired Hierarchical Diffusion for fMRI Reconstruction via ROI Encoder and visuAl Mapping
CV and Pattern Recognition
Reconstructs images from brain scans better.
Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction
CV and Pattern Recognition
Lets computers watch videos from brain scans.