Deep Learning Architectures for Code-Modulated Visual Evoked Potentials Detection

Published: November 26, 2025 | arXiv ID: 2511.21940v1

By: Kiran Nair, Hubert Cecotti

Potential Business Impact:

Enables users to control computers directly with brain signals.

Business Areas:
Image Recognition, Data and Analytics, Software

Non-invasive Brain-Computer Interfaces (BCIs) based on Code-Modulated Visual Evoked Potentials (C-VEPs) require highly robust decoding methods to address temporal variability and session-dependent noise in EEG signals. This study proposes and evaluates several deep learning architectures, including convolutional neural networks (CNNs) for 63-bit m-sequence reconstruction and classification, and Siamese networks for similarity-based decoding, alongside canonical correlation analysis (CCA) baselines. EEG data were recorded from 13 healthy adults under single-target flicker stimulation. The proposed deep models significantly outperformed traditional approaches, with distance-based decoding using Earth Mover's Distance (EMD) and constrained EMD showing greater robustness to latency variations than Euclidean and Mahalanobis metrics. Temporal data augmentation with small shifts further improved generalization across sessions. Among all models, the multi-class Siamese network achieved the best overall performance with an average accuracy of 96.89%, demonstrating the potential of data-driven deep architectures for reliable, single-trial C-VEP decoding in adaptive non-invasive BCI systems.
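The latency-robust distance decoding described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: for 1D signals, Earth Mover's Distance reduces to the L1 distance between normalized cumulative sums, and classification picks the candidate code template (e.g., a circular shift of the m-sequence) with the smallest distance to the reconstructed sequence. The function names and the 8-bit toy sequence are illustrative; the paper uses 63-bit m-sequences.

```python
import numpy as np

def emd_1d(a, b):
    # 1D Earth Mover's Distance between two nonnegative sequences
    # treated as unnormalized distributions over time: the L1 distance
    # between their normalized cumulative sums. Small temporal shifts
    # change this distance gradually, unlike pointwise Euclidean error.
    ca = np.cumsum(a / a.sum())
    cb = np.cumsum(b / b.sum())
    return float(np.abs(ca - cb).sum())

def decode(reconstructed, templates):
    # Return the index of the template closest to the reconstructed
    # sequence under EMD, plus all distances for inspection.
    dists = [emd_1d(reconstructed, t) for t in templates]
    return int(np.argmin(dists)), dists

# Toy example: templates are circular shifts of a short binary code
# (stand-in for the 63-bit m-sequence used in the paper).
base = np.array([1.0, 1, 0, 1, 0, 0, 1, 0])
templates = [np.roll(base, k) for k in range(len(base))]
idx, dists = decode(templates[3], templates)
```

In practice the "reconstructed" input would be the CNN's per-bit output for a single trial, and the constrained EMD variant mentioned in the abstract would additionally limit how far mass may move along the time axis.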

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)