ESPADA: Execution Speedup via Semantics Aware Demonstration Data Downsampling for Imitation Learning
By: Byungju Kim, Jinu Pahk, Chungwoo Lee, and more
Potential Business Impact:
Makes robots move about twice as fast without losing accuracy.
Behavior-cloning based visuomotor policies enable precise manipulation but often inherit the slow, cautious tempo of human demonstrations, limiting practical deployment. Prior acceleration methods, however, mainly rely on statistical or heuristic cues that ignore task semantics and can fail across diverse manipulation settings. We present ESPADA, a semantic and spatially aware framework that segments demonstrations using a VLM-LLM pipeline with 3D gripper-object relations, enabling aggressive downsampling only in non-critical segments while preserving precision-critical phases, without requiring extra data, architectural modifications, or any form of retraining. To scale from a single annotated episode to the full dataset, ESPADA propagates segment labels via Dynamic Time Warping (DTW) on dynamics-only features. Across both simulation and real-world experiments with ACT and DP baselines, ESPADA achieves approximately a 2x speed-up while maintaining success rates, narrowing the gap between human demonstrations and efficient robot control.
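The label-propagation step can be sketched in a few lines: align a new episode's dynamics features to an annotated episode with DTW, copy per-frame segment labels along the warping path, then downsample only non-critical frames. This is a minimal toy sketch, not the paper's implementation; the 1-D features, the label names, and the helper functions (`dtw_path`, `propagate_labels`, `downsample`) are all illustrative assumptions.

```python
def dtw_path(a, b):
    """Classic O(n*m) DTW on 1-D feature sequences; returns an optimal warping path.
    (Illustrative stand-in for the dynamics-only DTW described in the abstract.)"""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    # Backtrack from the end, preferring diagonal moves.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
        if step == cost[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == cost[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return list(reversed(path))

def propagate_labels(src_labels, src_feats, tgt_feats):
    """Transfer per-frame segment labels from one annotated episode to a new one
    by copying the source label of each aligned frame along the DTW path."""
    path = dtw_path(src_feats, tgt_feats)
    tgt_labels = [None] * len(tgt_feats)
    for i, j in path:
        tgt_labels[j] = src_labels[i]
    return tgt_labels

def downsample(frames, labels, keep_every=4):
    """Keep every precision-critical frame; thin non-critical segments aggressively."""
    return [f for k, (f, lab) in enumerate(zip(frames, labels))
            if lab == "critical" or k % keep_every == 0]
```

With a toy annotated episode whose "critical" phase is the run of 1-valued features, the labels land on the corresponding (time-warped) run in the target episode, and only those frames are kept densely.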
Similar Papers
SPIDER: Scalable Physics-Informed Dexterous Retargeting
Robotics
Teaches robots to move like humans using human videos.
SemanticVLA: Semantic-Aligned Sparsification and Enhancement for Efficient Robotic Manipulation
CV and Pattern Recognition
Helps robots understand and do tasks better.
Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach
Robotics
Makes robots learn and do tasks better.