MemER: Scaling Up Memory for Robot Control via Experience Retrieval
By: Ajay Sridhar, Jennifer Pan, Satvik Sharma, and more
Potential Business Impact:
Robots remember what they have seen so they can complete long, multi-step tasks.
Humans routinely rely on memory to perform tasks, yet most robot policies lack this capability; our goal is to endow them with the same ability. Naively conditioning on long observation histories is computationally expensive and brittle under covariate shift, while indiscriminate subsampling of history leads to irrelevant or redundant information. We propose a hierarchical policy framework in which the high-level policy is trained to select and track relevant keyframes from its past experience. The high-level policy uses the selected keyframes and the most recent frames when generating text instructions for a low-level policy to execute. This design is compatible with existing vision-language-action (VLA) models and enables the system to efficiently reason over long-horizon dependencies. In our experiments, we finetune Qwen2.5-VL-7B-Instruct and $\pi_{0.5}$ as the high-level and low-level policies, respectively, using demonstrations supplemented with minimal language annotations. Our approach, MemER, outperforms prior methods on three real-world long-horizon robotic manipulation tasks that require minutes of memory. Videos and code can be found at https://jen-pan.github.io/memer/.
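The abstract describes the hierarchical loop but not its implementation, so the following is a minimal Python sketch of how such a keyframe-retrieval control loop might be wired together. All names here (KeyframeMemory, select_keyframes, instruct, act, and the env interface) are hypothetical illustrations under assumed interfaces, not the authors' actual code; `high_level` stands in for a finetuned VLM such as Qwen2.5-VL-7B-Instruct and `low_level` for a VLA policy such as $\pi_{0.5}$.

```python
from collections import deque


class KeyframeMemory:
    """Hypothetical store of past camera frames from which the
    high-level policy selects and tracks relevant keyframes."""

    def __init__(self, max_frames: int = 2000):
        self.frames: deque = deque(maxlen=max_frames)

    def add(self, frame) -> None:
        self.frames.append(frame)


def hierarchical_control_loop(high_level, low_level, env, recent_window: int = 5):
    """Sketch of the two-level loop described in the abstract.

    Assumed interfaces: high_level.select_keyframes(frames) returns a small
    set of relevant past frames; high_level.instruct(keyframes, recent)
    returns a text instruction; low_level.act(obs, instruction) returns an
    action; env.step(action) returns (next_obs, done).
    """
    memory = KeyframeMemory()
    recent = deque(maxlen=recent_window)

    obs = env.reset()
    done = False
    while not done:
        memory.add(obs)
        recent.append(obs)

        # High level: retrieve relevant keyframes from experience, then
        # condition on them plus the most recent frames to produce a
        # text instruction for the low-level policy.
        keyframes = high_level.select_keyframes(memory.frames)
        instruction = high_level.instruct(keyframes, list(recent))

        # Low level: the VLA executes the instruction on the current
        # observation; only the short recent window and the selected
        # keyframes are ever passed to the models, keeping inference
        # cost bounded regardless of episode length.
        action = low_level.act(obs, instruction)
        obs, done = env.step(action)
```

The key design point this sketch reflects is that the full observation history is never fed to either model: the high-level policy compresses experience into a few retrieved keyframes plus a short recent window, which is what makes reasoning over minutes-long dependencies tractable.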
Similar Papers
Memo: Training Memory-Efficient Embodied Agents with Reinforcement Learning
Artificial Intelligence
Helps robots remember and learn from past experiences.
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.
FindingDory: A Benchmark to Evaluate Memory in Embodied Agents
CV and Pattern Recognition
Helps robots remember and act over time.