A Reinforcement Learning-Based Model for Mapping and Goal-Directed Navigation Using Multiscale Place Fields
By: Bekarys Dukenbaev, Andrew Gerstenslager, Alexander Johnson, and more
Potential Business Impact:
Helps robots learn to find their way faster.
Autonomous navigation in complex and partially observable environments remains a central challenge in robotics. Several bio-inspired models of mapping and navigation based on place cells in the mammalian hippocampus have been proposed. This paper introduces a robust new model that employs parallel layers of place fields at multiple spatial scales, a replay-based reward mechanism, and dynamic scale fusion. Simulations show that the model improves path efficiency and accelerates learning compared to single-scale baselines, highlighting the value of multiscale spatial representations for adaptive robot navigation.
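To make the multiscale idea concrete, below is a minimal Python sketch of parallel place-field layers and a simple fusion step. The Gaussian tuning curves, layer sizes, and softmax fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch: Gaussian place-field activations at several spatial
# scales, fused with softmax weights. Field widths, cell counts, and the
# fusion rule are illustrative assumptions, not the paper's actual model.

rng = np.random.default_rng(0)

def place_field_activations(pos, centers, width):
    """Gaussian activation of each place cell for a 2-D position."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Three parallel layers: fine, medium, and coarse place fields in a unit arena.
scales = [0.05, 0.15, 0.40]                                      # field widths
layers = [rng.uniform(0.0, 1.0, size=(100, 2)) for _ in scales]  # cell centers

def multiscale_state(pos, temperature=1.0):
    """Concatenate per-scale activations, weighted by a softmax over each
    layer's peak response (a simple stand-in for dynamic scale fusion)."""
    acts = [place_field_activations(pos, c, w) for c, w in zip(layers, scales)]
    peaks = np.array([a.max() for a in acts])
    weights = np.exp(peaks / temperature)
    weights /= weights.sum()
    return np.concatenate([w * a for w, a in zip(weights, acts)])

state = multiscale_state(np.array([0.3, 0.7]))
print(state.shape)  # (300,) -- one entry per place cell across all scales
```

A state vector like this could feed any standard RL value or policy learner; the fusion weights let fine scales dominate near familiar locations while coarse scales carry the signal elsewhere, which is one plausible reading of how multiscale representations speed up learning.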
Similar Papers
Place Cells as Proximity-Preserving Embeddings: From Multi-Scale Random Walk to Straight-Forward Path Planning
Neurons and Cognition
Helps robots learn to find their way around.
Online Hierarchical Policy Learning using Physics Priors for Robot Navigation in Unknown Environments
Robotics
Helps robots explore and navigate big, unknown places.
Mimicking associative learning of rats via a neuromorphic robot in open field maze using spatial cell models
Robotics
Robots learn like animals to explore new places.