Assessing Policy Updates: Toward Trust-Preserving Intelligent User Interfaces
By: Matan Solomon, Ofra Amir, Omer Ben-Porat
Potential Business Impact:
Shows whether an AI agent's update actually helped or made it worse.
Reinforcement learning agents are often updated with human feedback, yet such updates can be unreliable: reward misspecification, preference conflicts, or limited data may leave policies unchanged or even worse. Because policies are difficult to interpret directly, users face the challenge of deciding whether an update has truly helped. We propose that assessing model updates -- not just a single model -- is a critical design challenge for intelligent user interfaces. In a controlled study, participants provided feedback to an agent in a gridworld and then compared its original and updated policies. We evaluated four strategies for communicating updates: no demonstration, same-context, random-context, and salient-contrast demonstrations designed to highlight informative differences. Salient-contrast demonstrations significantly improved participants' ability to detect when updates helped or harmed performance, mitigated their bias toward assuming that feedback is always beneficial, and supported better trust calibration across contexts.
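The salient-contrast condition selects demonstration states meant to make the difference between the original and updated policies easy for a user to see. As a rough illustration only (the abstract does not give the authors' exact selection rule), the Python sketch below picks gridworld states where the two greedy policies disagree and ranks them by how much the disagreement matters under the updated value estimates; the arrays q_old and q_new and the scoring rule are assumptions made for this sketch.

import numpy as np

def salient_contrast_states(q_old, q_new, top_k=3):
    """Pick states where the original and updated policies disagree most.

    q_old, q_new: arrays of shape (n_states, n_actions) holding action-value
    estimates before and after the feedback update (a hypothetical
    representation, not taken from the paper). Returns up to top_k state
    indices ranked by how much switching to the updated policy's action
    changes the expected outcome in that state.
    """
    old_actions = q_old.argmax(axis=1)   # greedy action before the update
    new_actions = q_new.argmax(axis=1)   # greedy action after the update
    disagree = np.flatnonzero(old_actions != new_actions)

    # Saliency score: value gap between the new and old actions under the
    # updated estimates; larger gaps should make the contrast more visible.
    scores = (q_new[disagree, new_actions[disagree]]
              - q_new[disagree, old_actions[disagree]])
    order = disagree[np.argsort(-np.abs(scores))]
    return order[:top_k].tolist()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_before = rng.normal(size=(25, 4))                      # toy 5x5 gridworld, 4 actions
    q_after = q_before + rng.normal(scale=0.5, size=(25, 4))  # simulated update
    print(salient_contrast_states(q_before, q_after))

In this toy usage, the printed state indices would be the ones shown to the user as side-by-side demonstrations of the old and new behavior; the same-context and random-context conditions from the study would instead reuse the feedback states or sample states at random.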
Similar Papers
Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces
Human-Computer Interaction
Makes apps learn how you like them.
Adaptive Human-Computer Interaction Strategies Through Reinforcement Learning in Complex
Human-Computer Interaction
Makes computers learn to work better with people.
Learning Steerable Clarification Policies with Collaborative Self-play
Machine Learning (CS)
AI learns to ask questions when unsure.