Brain-Gen: Towards Interpreting Neural Signals for Stimulus Reconstruction Using Transformers and Latent Diffusion Models
By: Hasib Aslam, Muhammad Talal Faiz, Muhammad Imran Malik
Advances in neuroscience and artificial intelligence have enabled preliminary decoding of brain activity. Despite this progress, however, the interpretability of neural representations remains limited. A significant challenge arises from the intrinsic properties of electroencephalography (EEG) signals, including high noise levels, spatial diffusion, and pronounced temporal variability. To interpret the neural mechanisms underlying thought, we propose a transformer-based framework that extracts spatio-temporal representations of observed visual stimuli from EEG recordings. These features are then incorporated into the attention mechanisms of Latent Diffusion Models (LDMs) to reconstruct the visual stimuli from brain activity. Quantitative evaluations on publicly available benchmark datasets demonstrate that the proposed method excels at modeling semantic structure in EEG signals, achieving up to a 6.5% increase in latent-space clustering accuracy and an 11.8% increase in zero-shot generalization to unseen classes, while maintaining an Inception Score and Fréchet Inception Distance comparable to existing baselines. Our work marks a significant step towards generalizable semantic interpretation of EEG signals.
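The abstract does not spell out how the EEG features enter the LDM's attention mechanism. A common conditioning scheme in latent diffusion is cross-attention, where the denoiser's latent tokens act as queries and the conditioning features supply keys and values. The following is a minimal NumPy sketch of that mechanism under assumed, illustrative dimensions (64 latent tokens of width 128, 440 EEG feature vectors of width 96); the function and variable names are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def eeg_cross_attention(latent, eeg_feats, d_k=32, seed=0):
    """Condition image latents (queries) on EEG features (keys/values).

    Illustrative single-head cross-attention with randomly initialized
    projections; a trained model would learn W_q, W_k, W_v.
    """
    rng = np.random.default_rng(seed)
    d_lat, d_eeg = latent.shape[-1], eeg_feats.shape[-1]
    W_q = rng.standard_normal((d_lat, d_k)) / np.sqrt(d_lat)
    W_k = rng.standard_normal((d_eeg, d_k)) / np.sqrt(d_eeg)
    W_v = rng.standard_normal((d_eeg, d_lat)) / np.sqrt(d_eeg)

    Q = latent @ W_q          # (n_tokens, d_k)
    K = eeg_feats @ W_k       # (n_eeg, d_k)
    V = eeg_feats @ W_v       # (n_eeg, d_lat)
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_tokens, n_eeg)
    return latent + attn @ V  # residual update of the latents

latent = np.random.default_rng(1).standard_normal((64, 128))  # latent tokens
eeg = np.random.default_rng(2).standard_normal((440, 96))     # EEG feature sequence
out = eeg_cross_attention(latent, eeg)
print(out.shape)  # (64, 128): latents keep their shape after conditioning
```

The residual form lets the EEG-derived values perturb, rather than replace, the latent representation at each denoising step, which is how text conditioning is typically wired into LDMs as well.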