Inferring trust in recommendation systems from brain, behavioural, and physiological data
By: Vincent K. M. Cheung, Pei-Cheng Shih, Masato Hirano, and more
Potential Business Impact:
Helps AI learn how much people trust it.
As people increasingly rely on artificial intelligence (AI) to curate information and make decisions, placing the appropriate amount of trust in automated intelligent systems has become ever more important. However, current measurements of trust in automation still largely rely on self-reports, which are subjective and disruptive to the user. Here, we take music recommendation as a model to investigate the neural and cognitive processes underlying trust in automation. We observed that system accuracy was directly related to users' trust and modulated the influence of recommendation cues on music preference. Modelling users' reward-encoding process with a reinforcement learning model further revealed that system accuracy, expected reward, and prediction error were related to oscillatory neural activity recorded via EEG and to changes in pupil diameter. Our results provide a neurally grounded account of calibrating trust in automation and highlight the promise of a multimodal approach towards developing trustworthy AI systems.
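For readers unfamiliar with this class of model, a minimal sketch of a delta-rule (Rescorla-Wagner-style) learner illustrates how trial-by-trial expected reward and prediction error, the two quantities the abstract relates to EEG and pupil measures, can be computed from recommendation outcomes. The function name, learning rate, and binary reward coding below are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def simulate_rl_trust(rewards, alpha=0.3, v0=0.5):
    """Trial-by-trial expected reward and prediction error.

    rewards : sequence of outcomes (e.g. 1 = liked recommendation,
              0 = disliked recommendation) -- hypothetical coding.
    alpha   : learning rate (assumed value).
    v0      : initial expected reward.
    """
    v = v0
    expected, pred_errors = [], []
    for r in rewards:
        expected.append(v)       # expectation held before the outcome
        delta = r - v            # reward prediction error
        pred_errors.append(delta)
        v = v + alpha * delta    # delta-rule update of expected reward
    return np.array(expected), np.array(pred_errors)

# Example: a recommender that is correct ~80% of the time
rng = np.random.default_rng(0)
outcomes = rng.binomial(1, 0.8, size=20)
V, delta = simulate_rl_trust(outcomes)
print(V[-1])  # expected reward drifts toward the system's accuracy
```

In such a sketch, expected reward tracks the system's accuracy over trials, while the prediction-error series provides a candidate regressor for neural and pupillometric signals.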
Similar Papers
Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance
Human-Computer Interaction
People trust computers more when they don't trust people.
Six Guidelines for Trustworthy, Ethical and Responsible Automation Design
Human-Computer Interaction
Helps people trust computers correctly.
Understanding Human-AI Trust in Education
Computers and Society
Helps students trust AI tutors correctly.