MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation
By: Huanqian Wang, Chi Bene Chen, Yang Yue, and more
Potential Business Impact:
Robots learn to grab things better by practicing with moving objects.
Imitation learning has shown immense promise for robotic manipulation, yet its practical deployment is fundamentally constrained by data scarcity. Despite prior work on collecting large-scale datasets, a significant gap to robust spatial generalization remains. We identify a key limitation: individual trajectories, regardless of their length, are typically collected from a single, static spatial configuration of the environment. This includes fixed object and target positions as well as unchanging camera viewpoints, which significantly restricts the diversity of spatial information available for learning. To address this critical bottleneck in data efficiency, we propose MOtion-Based Variability Enhancement (MOVE), a simple yet effective data collection paradigm that enables the acquisition of richer spatial information from dynamic demonstrations. Our core contribution is an augmentation strategy that injects motion into any movable objects in the environment during each demonstration. This process implicitly generates a dense and diverse set of spatial configurations within a single trajectory. We conduct extensive experiments in both simulation and real-world environments to validate our approach. For example, in simulation tasks requiring strong spatial generalization, MOVE achieves an average success rate of 39.1%, a 76.1% relative improvement over the static data collection paradigm (22.2%), and yields up to 2-5× gains in data efficiency on certain tasks. Our code is available at https://github.com/lucywang720/MOVE.
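The core idea can be illustrated with a minimal sketch: a static demonstration records the manipulated object at one fixed position, while a MOVE-style demonstration perturbs the object's position at every timestep, so a single trajectory covers many spatial configurations. The random-walk motion model, the `step_scale` parameter, and the function names below are illustrative assumptions, not the paper's actual implementation.

```python
import random

def collect_static_demo(num_steps, obj_pos=(0.5, 0.5)):
    """Baseline paradigm: the object sits at one fixed position
    for the entire trajectory, so every frame sees the same
    spatial configuration."""
    return [obj_pos for _ in range(num_steps)]

def collect_move_demo(num_steps, obj_pos=(0.5, 0.5), step_scale=0.02, seed=0):
    """MOVE-style sketch (assumed motion model): inject a small random
    displacement into the movable object at every timestep, so one
    trajectory implicitly visits a dense set of spatial configurations."""
    rng = random.Random(seed)
    x, y = obj_pos
    positions = []
    for _ in range(num_steps):
        # Random-walk perturbation of the object's planar position.
        x += rng.uniform(-step_scale, step_scale)
        y += rng.uniform(-step_scale, step_scale)
        positions.append((round(x, 4), round(y, 4)))
    return positions

static = collect_static_demo(50)
moving = collect_move_demo(50)
# One static trajectory covers a single configuration; one MOVE
# trajectory covers many distinct object positions.
print(len(set(static)), len(set(moving)))
```

In a real pipeline the recorded states would also include robot proprioception and camera observations; the point here is only that the spatial diversity per trajectory grows without collecting more demonstrations.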
Similar Papers
A Study on Enhancing the Generalization Ability of Visuomotor Policies via Data Augmentation
Robotics
Teaches robots to do tasks in new places.
Train Once, Deploy Anywhere: Realize Data-Efficient Dynamic Object Manipulation
Robotics
Robots learn to grab many things with few examples.
The Quest for Generalizable Motion Generation: Data, Model, and Evaluation
CV and Pattern Recognition
Makes computer-made people move more realistically.