TA-VLA: Elucidating the Design Space of Torque-aware Vision-Language-Action Models

Published: September 9, 2025 | arXiv ID: 2509.07962v1

By: Zongzheng Zhang, Haobo Xu, Zhuo Yang, and more

Potential Business Impact:

Robots that can sense and use force feedback perform manipulation tasks more reliably.

Business Areas:
Autonomous Vehicles, Transportation

Many robotic manipulation tasks require sensing and responding to force signals such as torque to assess whether the task has been successfully completed and to enable closed-loop control. However, current Vision-Language-Action (VLA) models lack the ability to integrate such subtle physical feedback. In this work, we explore Torque-aware VLA models, aiming to bridge this gap by systematically studying the design space for incorporating torque signals into existing VLA architectures. We identify and evaluate several strategies, leading to three key findings. First, introducing torque adapters into the decoder consistently outperforms inserting them into the encoder. Third, inspired by joint prediction and planning paradigms in autonomous driving, we propose predicting torque as an auxiliary output, which further improves performance. This strategy encourages the model to build a physically grounded internal representation of interaction dynamics. Extensive quantitative and qualitative experiments across contact-rich manipulation benchmarks validate our findings.
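To make the two design choices highlighted in the abstract concrete, here is a minimal sketch, not the authors' implementation: it assumes a transformer action decoder whose hidden states are conditioned on joint torques through a FiLM-style `TorqueAdapter` inserted after each decoder layer, plus an auxiliary `torque_head` that predicts torques alongside actions. All module names, dimensions, and the conditioning scheme are illustrative assumptions.

```python
# Hypothetical sketch of a torque-aware action decoder (PyTorch).
import torch
import torch.nn as nn


class TorqueAdapter(nn.Module):
    """Maps a joint-torque reading to a scale/shift applied to decoder features."""

    def __init__(self, num_joints: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(num_joints, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, 2 * hidden_dim),  # produces scale and shift
        )

    def forward(self, h: torch.Tensor, torque: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, hidden_dim); torque: (batch, num_joints)
        scale, shift = self.proj(torque).chunk(2, dim=-1)
        return h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)


class TorqueAwareDecoder(nn.Module):
    """Action decoder with torque adapters and an auxiliary torque-prediction head."""

    def __init__(self, hidden_dim=512, num_joints=7, action_dim=7, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model=hidden_dim, nhead=8, batch_first=True)
            for _ in range(num_layers)
        )
        self.adapters = nn.ModuleList(
            TorqueAdapter(num_joints, hidden_dim) for _ in range(num_layers)
        )
        self.action_head = nn.Linear(hidden_dim, action_dim)
        self.torque_head = nn.Linear(hidden_dim, num_joints)  # auxiliary output

    def forward(self, action_queries, vl_tokens, torque):
        h = action_queries
        for layer, adapter in zip(self.layers, self.adapters):
            # Cross-attend to vision-language tokens, then inject torque conditioning.
            h = adapter(layer(h, vl_tokens), torque)
        return self.action_head(h), self.torque_head(h)


# Toy usage: joint loss over action prediction and auxiliary torque prediction.
decoder = TorqueAwareDecoder()
queries = torch.randn(2, 8, 512)     # learned action queries
vl_tokens = torch.randn(2, 64, 512)  # vision-language encoder outputs
torque = torch.randn(2, 7)           # current joint torques
actions, pred_torque = decoder(queries, vl_tokens, torque)
loss = actions.pow(2).mean() + 0.1 * pred_torque.pow(2).mean()  # placeholder targets
```

The sketch mirrors the abstract's two findings: torque enters the model on the decoder side rather than the encoder side, and the auxiliary torque head gives the model a training signal about interaction dynamics in addition to the action output.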

Country of Origin
🇨🇳 China

Page Count
19 pages

Category
Computer Science:
Robotics