A Convolutional Framework for Mapping Imagined Auditory MEG into Listened Brain Responses
By: Maryam Maghsoudi, Mohsen Rezaeizadeh, Shihab Shamma
Potential Business Impact:
Could enable brain-computer interfaces that decode imagined music and speech.
Decoding imagined speech engages complex neural processes that are difficult to interpret due to uncertainty in timing and the limited availability of imagined-response datasets. In this study, we present a magnetoencephalography (MEG) dataset collected from trained musicians as they imagined and listened to musical and poetic stimuli. We show that both imagined and perceived brain responses contain consistent, condition-specific information. Using a sliding-window ridge regression model, we first mapped imagined responses to listened responses at the single-subject level, but found limited generalization across subjects. At the group level, we developed an encoder-decoder convolutional neural network with a subject-specific calibration layer that produced stable and generalizable mappings. The CNN consistently outperformed the null model, yielding significantly higher correlations between predicted and true listened responses for nearly all held-out subjects. Our findings demonstrate that imagined neural activity can be transformed into perception-like responses, providing a foundation for future brain-computer interface applications involving imagined speech and music.
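To make the group-level architecture concrete, the sketch below illustrates one plausible form of an encoder-decoder 1-D CNN with a subject-specific calibration layer, as described in the abstract. It is not the authors' implementation: the sensor count, number of subjects, layer widths, kernel sizes, the 1x1-convolution form of the calibration layer, and the correlation-based scoring are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code) of mapping imagined MEG
# responses to listened responses with a shared encoder-decoder CNN and a
# per-subject linear calibration of the sensor dimension.
import torch
import torch.nn as nn

N_SENSORS = 157   # assumed MEG sensor count
N_SUBJECTS = 10   # assumed number of subjects
HIDDEN = 64       # assumed channel width of the shared CNN

class ImaginedToListenedCNN(nn.Module):
    def __init__(self, n_sensors=N_SENSORS, n_subjects=N_SUBJECTS, hidden=HIDDEN):
        super().__init__()
        # Subject-specific calibration: one learned remixing of sensors per subject.
        self.calibration = nn.ModuleList(
            [nn.Conv1d(n_sensors, n_sensors, kernel_size=1) for _ in range(n_subjects)]
        )
        # Shared encoder: temporal convolutions over the imagined response.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_sensors, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        # Shared decoder: maps the latent sequence back to sensor space,
        # approximating the listened response.
        self.decoder = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, n_sensors, kernel_size=9, padding=4),
        )

    def forward(self, x, subject_id):
        # x: (batch, sensors, time) imagined MEG response from one subject.
        x = self.calibration[subject_id](x)
        return self.decoder(self.encoder(x))

# Toy usage: predict a listened response from an imagined trial and score it
# with the Pearson correlation used for evaluation in the abstract.
model = ImaginedToListenedCNN()
imagined = torch.randn(8, N_SENSORS, 200)       # 8 trials, 200 time samples
predicted = model(imagined, subject_id=0)
listened = torch.randn_like(predicted)          # placeholder ground truth
corr = torch.corrcoef(torch.stack([predicted.flatten(), listened.flatten()]))[0, 1]
```

In this reading, only the thin calibration layer is fitted per subject while the encoder and decoder are shared across the group, which is one common way to obtain mappings that generalize to held-out subjects after a brief calibration step.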
Similar Papers
Estimating Brain Activity with High Spatial and Temporal Resolution using a Naturalistic MEG-fMRI Encoding Model
Neurons and Cognition
Combines MEG and fMRI to estimate brain activity at high spatial and temporal resolution.
Neural Decoding of Overt Speech from ECoG Using Vision Transformers and Contrastive Representation Learning
Artificial Intelligence
Decodes overt speech from ECoG signals, a step toward speech neuroprostheses.
Cueless EEG imagined speech for subject identification: dataset and benchmarks
Machine Learning (CS)
Uses imagined-speech EEG as a biometric signal for identifying individual users.