Confidence Calibration in Vision-Language-Action Models
By: Thomas P. Zollo, Richard Zemel
Potential Business Impact:
Robots can estimate how likely they are to succeed or fail.
Trustworthy robot behavior requires not only high levels of task success but also that the robot can reliably quantify how likely it is to succeed. To this end, we present the first systematic study of confidence calibration in vision-language-action (VLA) foundation models, which map visual observations and natural-language instructions to low-level robot motor commands. We begin with extensive benchmarking to understand the critical relationship between task success and calibration error across multiple datasets and VLA variants, finding that task performance and calibration are not in tension. Next, we introduce prompt ensembles for VLAs, a lightweight, Bayesian-inspired algorithm that averages confidence across paraphrased instructions and consistently improves calibration. We further analyze calibration over the task time horizon, showing that confidence is often most reliable after making some progress, suggesting natural points for risk-aware intervention. Finally, we reveal differential miscalibration across action dimensions and propose action-wise Platt scaling, a method to recalibrate each action dimension independently to produce better confidence estimates. Our aim in this study is to begin to develop the tools and conceptual understanding necessary to render VLAs both highly performant and highly trustworthy via reliable uncertainty quantification.
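The benchmarking described above hinges on measuring calibration error. Below is a minimal sketch of the standard expected calibration error (ECE) metric over per-episode confidences and binary success outcomes; the equal-width binning and bin count are generic choices, not details taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, successes, n_bins=10):
    """ECE: the gap |mean confidence - empirical success rate| per bin,
    weighted by the fraction of samples falling in that bin."""
    confidences = np.asarray(confidences, dtype=float)
    successes = np.asarray(successes, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(bins[:-1], bins[1:])):
        # Make the top bin inclusive so confidence == 1.0 is counted.
        upper = (confidences <= hi) if i == n_bins - 1 else (confidences < hi)
        mask = (confidences >= lo) & upper
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - successes[mask].mean())
        ece += mask.mean() * gap
    return ece
```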
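Prompt ensembling itself is simple to sketch: query the policy with several paraphrases of the same instruction and average the resulting confidences. The `policy.confidence(image, instruction)` interface below is a hypothetical stand-in for whatever confidence readout a given VLA exposes, not the paper's API.

```python
import numpy as np

def prompt_ensemble_confidence(policy, image, instruction, paraphrases):
    """Average the policy's success confidence over semantically
    equivalent paraphrases of the instruction (the Bayesian-inspired
    ensemble over prompts described in the abstract)."""
    prompts = [instruction] + list(paraphrases)
    scores = [policy.confidence(image, p) for p in prompts]
    return float(np.mean(scores))

# Example usage with hand-written paraphrases (illustrative only):
# conf = prompt_ensemble_confidence(
#     policy, obs_image, "pick up the red block",
#     paraphrases=["grasp the red cube", "lift the red block"],
# )
```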
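Action-wise Platt scaling fits one logistic calibrator per action dimension on held-out confidence/outcome pairs, so that each dimension's raw confidence is mapped through its own learned sigmoid. The sketch below assumes per-dimension score arrays and binary success labels; the data layout and the dimension set (e.g., translation, rotation, gripper channels) are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_actionwise_platt(scores, labels):
    """Fit one Platt calibrator per action dimension.

    scores: list of arrays, scores[d] holds raw confidences for dim d
    labels: list of arrays, labels[d] holds binary outcomes for dim d
    """
    calibrators = []
    for s, y in zip(scores, labels):
        # A one-feature logistic regression learns sigmoid(a*s + b),
        # which is exactly Platt scaling for that dimension.
        lr = LogisticRegression()
        lr.fit(np.asarray(s).reshape(-1, 1), np.asarray(y))
        calibrators.append(lr)
    return calibrators

def calibrate(calibrators, raw_scores):
    """Map one raw confidence per dimension to calibrated probabilities."""
    return np.array([
        c.predict_proba(np.array([[s]]))[0, 1]
        for c, s in zip(calibrators, raw_scores)
    ])
```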
Similar Papers
Evaluating Uncertainty and Quality of Visual Language Action-enabled Robots
Software Engineering
Helps robots know how well they did a job.
Experiences from Benchmarking Vision-Language-Action Models for Robotic Manipulation
Robotics
Robots learn to do tasks better by watching and listening.