Assessing Policy Updates: Toward Trust-Preserving Intelligent User Interfaces

Published: October 12, 2025 | arXiv ID: 2510.10616v1

By: Matan Solomon, Ofra Amir, Omer Ben-Porat

Potential Business Impact:

Shows whether an AI agent's behavior actually improved or worsened after being updated with human feedback.

Business Areas:
Human Computer Interaction Design, Science and Engineering

Reinforcement learning agents are often updated with human feedback, yet such updates can be unreliable: reward misspecification, preference conflicts, or limited data may leave policies unchanged or even make them worse. Because policies are difficult to interpret directly, users face the challenge of deciding whether an update has truly helped. We propose that assessing model updates -- not just a single model -- is a critical design challenge for intelligent user interfaces. In a controlled study, participants provided feedback to an agent in a gridworld and then compared its original and updated policies. We evaluated four strategies for communicating updates: no demonstration, same-context, random-context, and salient-contrast demonstrations designed to highlight informative differences. Salient-contrast demonstrations significantly improved participants' ability to detect when updates helped or harmed performance, mitigated participants' bias towards assuming that feedback is always beneficial, and supported better trust calibration across contexts.
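The abstract's salient-contrast demonstrations show states where the original and updated policies differ informatively. A minimal sketch of one way such states might be selected -- ranking states by Q-value disagreement between the two policies is our illustrative assumption here, not necessarily the paper's actual criterion:

```python
import numpy as np

def salient_contrast_states(old_q, new_q, top_k=3):
    """Pick states where the original and updated policies disagree,
    ranked by how strongly the updated policy prefers its new action.

    old_q, new_q: arrays of shape (n_states, n_actions) holding Q-values
    for the pre- and post-update policies. Returns up to top_k state
    indices to use as demonstration starting points.
    """
    old_act = old_q.argmax(axis=1)   # greedy action of the original policy
    new_act = new_q.argmax(axis=1)   # greedy action of the updated policy
    disagree = np.flatnonzero(old_act != new_act)
    # Rank disagreement states by the updated policy's value gap between
    # its action and the original one -- a proxy for "informativeness".
    gaps = new_q[disagree, new_act[disagree]] - new_q[disagree, old_act[disagree]]
    order = disagree[np.argsort(-gaps)]
    return order[:top_k].tolist()

# Toy example: 4 gridworld states, 2 actions. The policies disagree in
# states 0 and 2; state 0 has the larger value gap, so it ranks first.
old_q = np.array([[1.0, 0.0], [0.2, 0.8], [0.5, 0.4], [0.0, 1.0]])
new_q = np.array([[0.0, 1.0], [0.2, 0.8], [0.1, 0.9], [0.0, 1.0]])
print(salient_contrast_states(old_q, new_q))  # → [0, 2]
```

Demonstrating the agent from the top-ranked states would then contrast old and new behavior exactly where the update changed decisions, rather than in same-context or random-context states.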

Country of Origin
🇮🇱 Israel

Page Count
16 pages

Category
Computer Science:
Human-Computer Interaction