When to Act: Calibrated Confidence for Reliable Human Intention Prediction in Assistive Robotics

Published: January 8, 2026 | arXiv ID: 2601.04982v1

By: Johannes A. Gaus, Winfried Ilg, Daniel Haeufle

Potential Business Impact:

Helps assistive robots decide when it is safe to act on a predicted user intention.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Assistive devices must determine both what a user intends to do and how reliable that prediction is before providing support. We introduce a safety-critical triggering framework based on calibrated probabilities for multimodal next-action prediction in Activities of Daily Living. Raw model confidence often fails to reflect true correctness, posing a safety risk. Post-hoc calibration aligns predicted confidence with empirical reliability and reduces miscalibration by about an order of magnitude without affecting accuracy. The calibrated confidence drives a simple ACT/HOLD rule that acts only when reliability is high and withholds assistance otherwise. This turns the confidence threshold into a quantitative safety parameter for assisted actions and enables verifiable behavior in an assistive control loop.
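The mechanism described above can be sketched in a few lines: calibrate raw confidences post hoc (here with temperature scaling, a common post-hoc calibration method; the paper does not specify which calibrator it uses), then gate assistance with a confidence threshold. All function names, the grid-search fitting, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally temperature-scaled."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def fit_temperature(logit_sets, labels):
    """Pick the temperature minimizing negative log-likelihood on held-out
    data via a simple grid search (an assumed stand-in for the paper's
    unspecified calibration procedure)."""
    grid = [0.5 + 0.1 * i for i in range(46)]  # candidate T in [0.5, 5.0]
    def nll(t):
        total = 0.0
        for logits, y in zip(logit_sets, labels):
            p = softmax(logits, t)[y]
            total -= math.log(max(p, 1e-12))
        return total
    return min(grid, key=nll)

def act_or_hold(logits, temperature, threshold=0.9):
    """ACT only when calibrated confidence clears the safety threshold;
    otherwise HOLD and withhold assistance."""
    probs = softmax(logits, temperature)
    confidence = max(probs)
    if confidence >= threshold:
        return "ACT", probs.index(confidence)
    return "HOLD", None
```

The threshold is the quantitative safety parameter the abstract refers to: raising it trades assistance coverage for reliability, and because the probabilities are calibrated, the threshold approximately bounds the empirical error rate of triggered actions.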

Page Count
6 pages

Category
Computer Science:
Robotics