CroSTAta: Cross-State Transition Attention Transformer for Robotic Manipulation

Published: October 1, 2025 | arXiv ID: 2510.00726v1

By: Giovanni Minelli, Giulio Turrisi, Victor Barasuol, and more

Potential Business Impact:

Teaches robots to recognize and recover from their own mistakes during task execution.

Business Areas:
Autonomous Vehicles, Transportation

Learning robotic manipulation policies through supervised learning from demonstrations remains challenging when policies encounter execution variations not explicitly covered during training. While incorporating historical context through attention mechanisms can improve robustness, standard approaches process all past states in a sequence without explicitly modeling the temporal structure that demonstrations may contain, such as failure-and-recovery patterns. We propose a Cross-State Transition Attention Transformer that employs a novel State Transition Attention (STA) mechanism to modulate standard attention weights based on learned state-evolution patterns, enabling policies to better adapt their behavior based on execution history. Our approach combines this structured attention with temporal masking during training, in which visual information is randomly removed from recent timesteps to encourage temporal reasoning from historical context. Evaluation in simulation shows that STA consistently outperforms standard cross-attention and temporal-modeling baselines such as temporal convolutional networks (TCNs) and LSTMs across all tasks, achieving more than a 2x improvement over cross-attention on precision-critical tasks.
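The abstract names two mechanisms: attention logits modulated by learned state-transition patterns, and temporal masking of recent visual inputs during training. Below is a minimal PyTorch sketch of one plausible reading; the class `StateTransitionAttention`, the delta-based transition features, the additive-bias form of modulation, and the masking parameters `p` and `window` are all our assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateTransitionAttention(nn.Module):
    """Sketch of cross-attention whose logits are modulated by learned
    state-transition features. Hypothetical structure, not the paper's code."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Projects state deltas (s_t - s_{t-1}); one plausible way to encode
        # "state evolution patterns" (assumption).
        self.trans_proj = nn.Linear(d_model, d_model)

    def forward(self, query: torch.Tensor, states: torch.Tensor) -> torch.Tensor:
        # query: (B, 1, D) current state; states: (B, T, D) execution history.
        q, k, v = self.q_proj(query), self.k_proj(states), self.v_proj(states)
        scale = q.size(-1) ** 0.5
        logits = q @ k.transpose(-2, -1) / scale            # (B, 1, T)

        # Consecutive state differences, zero at t=0 via prepend: (B, T, D).
        deltas = torch.diff(states, dim=1, prepend=states[:, :1])
        trans = self.trans_proj(deltas)
        trans_logits = q @ trans.transpose(-2, -1) / scale  # (B, 1, T)

        # Modulate standard attention with the transition scores (additive
        # bias here; the paper's exact modulation may differ).
        weights = F.softmax(logits + trans_logits, dim=-1)
        return weights @ v                                   # (B, 1, D)

def mask_recent_visual(visual: torch.Tensor, p: float = 0.5,
                       window: int = 3) -> torch.Tensor:
    """Temporal-masking sketch: with probability p per sample, zero the visual
    features of the most recent `window` timesteps so the policy must reason
    from earlier history. p and window are illustrative values."""
    masked = visual.clone()                                  # (B, T, D)
    drop = torch.rand(visual.size(0)) < p
    masked[drop, -window:] = 0.0
    return masked

# Quick shape check (hypothetical dimensions).
sta = StateTransitionAttention(d_model=256)
history = mask_recent_visual(torch.randn(4, 10, 256))        # (B=4, T=10, D=256)
out = sta(torch.randn(4, 1, 256), history)
print(out.shape)                                             # torch.Size([4, 1, 256])
```

An additive bias keeps the transition signal in logit space, so softmax still normalizes the modulated weights; a multiplicative gate would be an equally plausible reading of "modulate".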

Page Count
8 pages

Category
Computer Science:
Robotics