ReEXplore: Improving MLLMs for Embodied Exploration with Contextualized Retrospective Experience Replay
By: Gengyuan Zhang, Mingcong Ding, Jingpei Wu, and more
Potential Business Impact:
Helps robots learn to explore new places faster.
Embodied exploration is a target-driven process that requires embodied agents to possess fine-grained perception and knowledge-enhanced decision making. While recent attempts leverage MLLMs for exploration due to their strong perceptual and reasoning abilities, we find that MLLM-based embodied agents remain suboptimal in exploring new environments: (i) they rely on profound but stale pre-trained knowledge, (ii) training-based approaches such as imitation learning or reinforcement learning are expensive for long-horizon tasks with sparse outcome rewards, and (iii) frontier-based exploration yields a large, visually nuanced action space in which MLLMs struggle to make reliable decisions. We address these challenges with ReEXplore, a training-free framework that performs retrospective experience replay to inject distilled, abstract experience at inference time, and hierarchical frontier selection to decompose frontier ranking into coarse-to-fine decisions. Our approach enables robust, traceable, and efficient exploration. Across multiple embodied exploration benchmarks, ReEXplore yields substantial improvements over strong MLLM baselines, with up to 3x higher success rate and navigation efficiency under open-source backbones.
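The two ideas in the abstract, retrieving distilled experience at inference time and ranking frontiers coarse-to-fine, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Frontier` structure, the keyword-overlap `score_fn` (a toy stand-in for an MLLM ranking call), and the string-keyed experience memory are all assumptions made for the sake of a runnable example.

```python
from dataclasses import dataclass

@dataclass
class Frontier:
    id: int
    region: str       # coarse grouping of the frontier (hypothetical)
    description: str  # fine-grained visual caption of the frontier view

def retrieve_experience(goal, memory):
    """Pull distilled, abstract tips from past episodes that match the goal."""
    return [tip for key, tip in memory if key in goal.lower()]

def keyword_score(text, goal, tips):
    """Toy stand-in for an MLLM ranking call: count shared words with
    the goal plus the retrieved experience tips."""
    words = set(goal.lower().split())
    for tip in tips:
        words |= set(tip.lower().split())
    return len(words & set(text.lower().split()))

def select_frontier(frontiers, goal, memory, score_fn=keyword_score):
    """Coarse-to-fine frontier selection: first pick a region, then pick
    the best frontier inside that region, with retrieved experience
    injected into both ranking steps."""
    tips = retrieve_experience(goal, memory)
    # Coarse step: rank regions instead of the full frontier set.
    regions = sorted({f.region for f in frontiers})
    best_region = max(regions, key=lambda r: score_fn(r, goal, tips))
    # Fine step: rank only the frontiers within the chosen region.
    candidates = [f for f in frontiers if f.region == best_region]
    return max(candidates, key=lambda f: score_fn(f.description, goal, tips))

frontiers = [
    Frontier(1, "hallway", "narrow hallway with closed doors"),
    Frontier(2, "dining area", "dining table and chairs"),
    Frontier(3, "dining area", "kitchen counter and dining chairs"),
]
memory = [("kitchen", "the kitchen often adjoins dining areas")]
chosen = select_frontier(frontiers, "find the kitchen", memory)
```

In this toy run the retrieved tip links "kitchen" to dining areas, so the coarse step picks the "dining area" region and the fine step picks frontier 3; replacing `keyword_score` with an MLLM query over the frontier views recovers the spirit of the described pipeline.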
Similar Papers
Improving RL Exploration for LLM Reasoning through Retrospective Replay
Machine Learning (CS)
Helps AI learn better by remembering good ideas.
Benchmarking In-context Experiential Learning Through Repeated Product Recommendations
Machine Learning (CS)
Teaches AI to learn from mistakes in real-time.
Vision to Geometry: 3D Spatial Memory for Sequential Embodied MLLM Reasoning and Exploration
CV and Pattern Recognition
Helps robots learn and remember tasks in new places.