Simple Models, Rich Representations: Visual Decoding from Primate Intracortical Neural Signals
By: Matteo Ciferri, Matteo Ferrante, Nicola Toschi
Potential Business Impact:
Generates plausible images from recorded primate brain activity.
Understanding how neural activity gives rise to perception is a central challenge in neuroscience. We address the problem of decoding visual information from high-density intracortical recordings in primates, using the THINGS Ventral Stream Spiking Dataset. We systematically evaluate the effects of model architecture, training objectives, and data scaling on decoding performance. Results show that decoding accuracy is mainly driven by modeling temporal dynamics in neural signals, rather than architectural complexity. A simple model combining temporal attention with a shallow MLP achieves up to 70% top-1 image retrieval accuracy, outperforming linear baselines as well as recurrent and convolutional approaches. Scaling analyses reveal predictable diminishing returns with increasing input dimensionality and dataset size. Building on these findings, we design a modular generative decoding pipeline that combines low-resolution latent reconstruction with semantically conditioned diffusion, generating plausible images from 200 ms of brain activity. This framework provides principles for brain-computer interfaces and semantic neural decoding.
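The paper does not include code here, but the decoder it describes (temporal attention pooling over binned spiking activity, followed by a shallow MLP, with images retrieved by embedding similarity) can be sketched minimally. Everything below is an assumption for illustration: the tensor shapes, the single-vector attention scoring, the hidden width, and the random "trained" weights are all hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 200 ms of activity binned into T time steps
# across C recording channels; D is the image-embedding dimension.
T, C, D = 20, 64, 128

def temporal_attention_pool(x, w_score):
    """Softmax attention over the time axis: (T, C) -> (C,)."""
    scores = x @ w_score                      # one score per time bin, (T,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                              # attention weights sum to 1
    return a @ x                              # weighted sum over time bins

def shallow_mlp(h, w1, b1, w2, b2):
    """One ReLU hidden layer projecting pooled features to embedding space."""
    return np.maximum(h @ w1 + b1, 0) @ w2 + b2

# Randomly initialised parameters stand in for trained ones.
w_score = rng.normal(size=C)
w1, b1 = rng.normal(size=(C, 256)) * 0.05, np.zeros(256)
w2, b2 = rng.normal(size=(256, D)) * 0.05, np.zeros(D)

x = rng.normal(size=(T, C))                   # one trial of spiking features
z = shallow_mlp(temporal_attention_pool(x, w_score), w1, b1, w2, b2)

# Retrieval: rank a gallery of candidate image embeddings by cosine similarity;
# top-1 accuracy counts how often the true image ranks first.
gallery = rng.normal(size=(100, D))
sims = (gallery @ z) / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(z))
top1 = int(np.argmax(sims))
print(z.shape, top1)
```

With trained weights, the same forward pass would score a trial against a gallery of image embeddings; the abstract's point is that this small model, because it weights time bins adaptively, outperforms both linear and heavier recurrent/convolutional decoders.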
Similar Papers
Adaptive Decoding via Hierarchical Neural Information Gradients in Mouse Visual Tasks
Neurons and Cognition
Helps computers understand how brains see.
Decoding Predictive Inference in Visual Language Processing via Spatiotemporal Neural Coherence
Neurons and Cognition
Helps computers understand sign language from brain waves.
NeuroSketch: An Effective Framework for Neural Decoding via Systematic Architectural Optimization
Neurons and Cognition
Lets brains control computers better.