Score: 2

From Minutes to Days: Scaling Intracranial Speech Decoding with Supervised Pretraining

Published: December 17, 2025 | arXiv ID: 2512.15830v1

By: Linnea Evanson, Mingfang Zhang, and more

BigTech Affiliations: Meta

Potential Business Impact:

Enables computers to decode speech directly from brain signals.

Business Areas:
Speech Recognition, Data and Analytics, Software

Decoding speech from brain activity has typically relied on limited neural recordings collected during short and highly controlled experiments. Here, we introduce a framework to leverage week-long intracranial and audio recordings from patients undergoing clinical monitoring, effectively increasing the training dataset size by over two orders of magnitude. With this pretraining, our contrastive learning model substantially outperforms models trained solely on classic experimental data, with gains that scale log-linearly with dataset size. Analysis of the learned representations reveals that, while brain activity represents speech features, its global structure largely drifts across days, highlighting the need for models that explicitly account for cross-day variability. Overall, our approach opens a scalable path toward decoding and modeling brain representations in both real-life and controlled task settings.
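The contrastive pretraining described above pairs neural recordings with concurrent audio. The paper's exact architecture is not given here, but the core idea can be sketched as a symmetric InfoNCE objective: matched brain/audio segments are pulled together in a shared embedding space while mismatched pairs within a batch are pushed apart. The function below is a minimal, hypothetical illustration (names, dimensions, and temperature are assumptions, not the authors' implementation):

```python
import numpy as np

def info_nce_loss(brain_emb, audio_emb, temperature=0.1):
    """Symmetric contrastive loss over a batch of paired brain/audio embeddings.

    Illustrative sketch only; assumes row i of each array comes from the
    same time segment, so diagonal entries of the similarity matrix are
    the positive pairs.
    """
    # L2-normalize each modality so dot products are cosine similarities
    brain = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    audio = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    sim = brain @ audio.T / temperature  # (batch, batch) similarity matrix

    def xent_diag(logits):
        # Row-wise softmax cross-entropy with the diagonal as targets
        logits = logits - logits.max(axis=1, keepdims=True)  # numeric stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        n = len(logits)
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average over both retrieval directions: brain->audio and audio->brain
    return (xent_diag(sim) + xent_diag(sim.T)) / 2

# Toy usage: 8 paired segments with 128-dimensional embeddings
rng = np.random.default_rng(0)
brain = rng.standard_normal((8, 128))
audio = rng.standard_normal((8, 128))
loss = info_nce_loss(brain, audio)
```

With random embeddings the loss sits near log(batch size); training the encoders to minimize it is what lets the week-long unlabeled recordings serve as pretraining data.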

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Sound