Score: 1

EEG-Driven Image Reconstruction with Saliency-Guided Diffusion Models

Published: October 30, 2025 | arXiv ID: 2510.26391v1

By: Igor Abramov, Ilya Makarov

Potential Business Impact:

Reconstructs pictures of what a person is seeing directly from their EEG brain signals.

Business Areas:
Image Recognition, Data and Analytics, Software

Existing EEG-driven image reconstruction methods often overlook spatial attention mechanisms, limiting fidelity and semantic coherence. To address this, we propose a dual-conditioning framework that combines EEG embeddings with spatial saliency maps to enhance image generation. Our approach leverages the Adaptive Thinking Mapper (ATM) for EEG feature extraction and fine-tunes Stable Diffusion 2.1 via Low-Rank Adaptation (LoRA) to align neural signals with visual semantics, while a ControlNet branch conditions generation on saliency maps for spatial control. Evaluated on THINGS-EEG, our method achieves a significant improvement over existing approaches in the quality of both low- and high-level image features, while aligning strongly with human visual attention. The results demonstrate that attentional priors resolve EEG ambiguities, enabling high-fidelity reconstructions with applications in medical diagnostics and neuroadaptive interfaces, and advancing neural decoding through efficient adaptation of pre-trained diffusion models.
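
The dual-conditioning setup described in the abstract (EEG embeddings standing in for text conditioning, a ControlNet branch fed with saliency maps, and LoRA adapters on the diffusion backbone) can be sketched with the Hugging Face diffusers API. This is a minimal illustration, not the authors' released code: the saliency ControlNet path, the LoRA adapter path, and the `atm_encode` stand-in for the Adaptive Thinking Mapper are assumptions made for the example.

```python
# Illustrative sketch of EEG + saliency dual conditioning on Stable Diffusion 2.1.
# Checkpoint paths and the ATM stand-in are hypothetical placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

def atm_encode(eeg_epoch: torch.Tensor) -> torch.Tensor:
    # Stand-in for the paper's Adaptive Thinking Mapper (ATM): project raw EEG
    # epochs to the (batch, 77, 1024) embedding space that SD 2.1 expects in
    # place of CLIP text embeddings. Real weights would come from training.
    batch = eeg_epoch.shape[0]
    return torch.randn(batch, 77, 1024, dtype=torch.float16, device="cuda")

# ControlNet branch conditioned on saliency maps (path is a placeholder for a
# ControlNet trained on saliency inputs and compatible with SD 2.1).
controlnet = ControlNetModel.from_pretrained(
    "path/to/saliency_controlnet", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# LoRA adapters fine-tuned to align EEG embeddings with visual semantics
# (path is illustrative).
pipe.load_lora_weights("path/to/eeg_lora_adapters")

eeg_epoch = torch.randn(1, 63, 250)        # e.g. 63 channels x 250 time samples
saliency_map = torch.rand(1, 3, 512, 512)  # spatial attention prior in [0, 1]

with torch.no_grad():
    eeg_embeds = atm_encode(eeg_epoch)
    image = pipe(
        prompt_embeds=eeg_embeds,   # EEG embeddings replace text conditioning
        image=saliency_map,         # ControlNet spatial condition
        num_inference_steps=30,
    ).images[0]
```

In this reading, passing `prompt_embeds` lets the EEG features drive the cross-attention pathway normally fed by CLIP text embeddings, while the ControlNet branch injects the saliency prior at the spatial feature level, matching the semantic-plus-spatial split the abstract describes.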

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
CV and Pattern Recognition