Simple Models, Rich Representations: Visual Decoding from Primate Intracortical Neural Signals

Published: January 16, 2026 | arXiv ID: 2601.11108v1

By: Matteo Ciferri, Matteo Ferrante, Nicola Toschi

Potential Business Impact:

Turns recorded brain activity (intracortical neural signals) into images, a step toward practical brain-computer interfaces.

Business Areas:
Image Recognition, Data and Analytics, Software

Understanding how neural activity gives rise to perception is a central challenge in neuroscience. We address the problem of decoding visual information from high-density intracortical recordings in primates, using the THINGS Ventral Stream Spiking Dataset. We systematically evaluate the effects of model architecture, training objectives, and data scaling on decoding performance. Results show that decoding accuracy is mainly driven by modeling temporal dynamics in neural signals, rather than architectural complexity. A simple model combining temporal attention with a shallow MLP achieves up to 70% top-1 image retrieval accuracy, outperforming linear baselines as well as recurrent and convolutional approaches. Scaling analyses reveal predictable diminishing returns with increasing input dimensionality and dataset size. Building on these findings, we design a modular generative decoding pipeline that combines low-resolution latent reconstruction with semantically conditioned diffusion, generating plausible images from 200 ms of brain activity. This framework provides principles for brain-computer interfaces and semantic neural decoding.
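The decoder described in the abstract is compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' released code) of the core idea: learned temporal-attention pooling over binned spike counts from a ~200 ms window, followed by a shallow MLP that projects into an image-embedding space where retrieval is scored by cosine similarity. The channel count, number of time bins, embedding size, and the CLIP-like target space are assumptions chosen only for illustration.

```python
# Hypothetical sketch of a temporal-attention + shallow-MLP neural decoder.
# All dimensions and names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class TemporalAttentionDecoder(nn.Module):
    def __init__(self, n_channels=1024, embed_dim=512, hidden_dim=1024):
        super().__init__()
        # Learned query that pools information across time bins.
        self.query = nn.Parameter(torch.randn(1, 1, n_channels))
        self.attn = nn.MultiheadAttention(
            embed_dim=n_channels, num_heads=4, batch_first=True
        )
        # Shallow MLP head mapping the attended neural state to an
        # image-embedding space (e.g. a CLIP-like space) used for retrieval.
        self.mlp = nn.Sequential(
            nn.LayerNorm(n_channels),
            nn.Linear(n_channels, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, spikes):
        # spikes: (batch, n_bins, n_channels) binned spike counts over ~200 ms.
        q = self.query.expand(spikes.size(0), -1, -1)
        attended, _ = self.attn(q, spikes, spikes)   # (batch, 1, n_channels)
        return self.mlp(attended.squeeze(1))         # (batch, embed_dim)


# Retrieval: rank candidate images by cosine similarity between the decoded
# embedding and precomputed image embeddings; top-1 accuracy is the fraction
# of trials where the true image ranks first.
decoder = TemporalAttentionDecoder()
spikes = torch.randn(8, 20, 1024)        # toy batch: 8 trials, 20 bins, 1024 channels
image_bank = torch.randn(100, 512)       # toy bank of candidate image embeddings
pred = nn.functional.normalize(decoder(spikes), dim=-1)
bank = nn.functional.normalize(image_bank, dim=-1)
top1 = (pred @ bank.T).argmax(dim=-1)    # predicted image index per trial
```

In this framing, the generative stage the abstract mentions would condition a diffusion model on the decoded embedding together with a low-resolution latent reconstruction, rather than stopping at retrieval; that stage is omitted from the sketch above.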

Country of Origin
🇮🇹 Italy

Page Count
15 pages

Category
Quantitative Biology:
Neurons and Cognition