High-resolution spatial memory requires grid-cell-like neural codes

Published: July 1, 2025 | arXiv ID: 2507.00598v1

By: Madison Cotteret, Christopher J. Kymn, Hugh Greatorex, and others

Potential Business Impact:

A model of how neural memory can stay robust to noise and imperfection while still representing continuous variables, such as position, at high resolution.

Business Areas:
Neuroscience Biotechnology, Science and Engineering

Continuous attractor networks (CANs) are widely used to model how the brain temporarily retains continuous behavioural variables, such as an animal's position in an environment, via persistent recurrent activity. However, this memory mechanism is very sensitive to even small imperfections, such as noise or heterogeneity, both of which are common in biological systems. Previous work has shown that discretising the continuum into a finite set of discrete attractor states provides robustness to these imperfections, but necessarily reduces the resolution of the represented variable, creating a dilemma between stability and resolution. We show that this stability-resolution dilemma is most severe for CANs using unimodal bump-like codes, as in traditional models. To overcome this, we investigate sparse binary distributed codes based on random feature embeddings, in which neurons have spatially periodic receptive fields. We demonstrate theoretically and with simulations that such grid-cell-like codes enable CANs to achieve both high stability and high resolution simultaneously. The model extends to embedding arbitrary nonlinear manifolds into a CAN, such as spheres or tori, and generalises linear path integration to integration along freely-programmable on-manifold vector fields. Together, this work provides a theory of how the brain could robustly represent continuous variables with high resolution and perform flexible computations over task-relevant manifolds.
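The core contrast the abstract draws — unimodal bump codes versus modular, grid-cell-like codes — can be illustrated with a toy sketch. The code below is not the paper's model; the population sizes, module periods, and bump width are invented for illustration. It encodes a 1D position in [0, 1) two ways with the same neuron budget and counts how many distinct codewords each scheme can produce, as a rough proxy for resolution:

```python
import numpy as np

def bump_code(x, n=24, width=2):
    """Unimodal 'bump' code: neurons whose centres lie near position x
    (on a ring, x in [0, 1)) are active; all others are silent."""
    centres = np.arange(n)
    d = np.abs(centres - x * n)
    d = np.minimum(d, n - d)  # circular distance on the ring
    return (d < width).astype(int)

def grid_code(x, periods=(2, 3, 5), cells_per_module=8):
    """Grid-cell-like code: each module tiles [0, 1) with its own spatial
    period, so each neuron has a spatially periodic receptive field.
    Position is read out from the combination of per-module phases."""
    modules = []
    for p in periods:
        phase = int(x * p * cells_per_module) % cells_per_module
        onehot = np.zeros(cells_per_module, dtype=int)
        onehot[phase] = 1
        modules.append(onehot)
    return np.concatenate(modules)

# Both codes use 24 neurons. Count distinct codewords over finely
# sampled positions: more distinct words means finer spatial resolution.
xs = np.linspace(0, 1, 1000, endpoint=False)
bump_words = {tuple(bump_code(x)) for x in xs}
grid_words = {tuple(grid_code(x)) for x in xs}
print(len(bump_words), len(grid_words))
```

With equal neuron counts and comparable sparsity, the modular code distinguishes more positions than the bump code, because the coprime periods multiply out combinatorially — a toy version of why grid-cell-like codes can ease the stability-resolution trade-off.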

Country of Origin
🇳🇱 Netherlands

Page Count
25 pages

Category
Computer Science:
Neural and Evolutionary Computing