Understanding Trust Toward Human versus AI-generated Health Information through Behavioral and Physiological Sensing
By: Xin Sun, Rongjun Ma, Shu Wei, and more
As AI-generated health information proliferates online and becomes increasingly indistinguishable from human-sourced information, it is critical to understand how people trust such content and how source labels shape that trust, especially when the information is inaccurate. We conducted two complementary studies: (1) a mixed-methods survey (N=142) employing a 2 (source: Human vs. LLM) × 2 (label: Human vs. AI) × 3 (type: General, Symptom, Treatment) design, and (2) a within-subjects lab study (N=40) incorporating eye-tracking and physiological sensing (ECG, EDA, skin temperature). Participants were presented with health information varying by source-label combination and asked to rate their trust while their gaze behavior and physiological signals were recorded. We found that LLM-generated information was trusted more than human-generated content, whereas information labeled as human was trusted more than information labeled as AI. Trust remained consistent across information types. Eye-tracking and physiological responses varied significantly by source and label. Machine learning models trained on these behavioral and physiological features predicted binary self-reported trust levels with 73% accuracy and information source with 65% accuracy. Our findings demonstrate that adding transparency labels to online health information modulates trust, and that behavioral and physiological features show potential to verify trust perceptions and indicate whether additional transparency is needed.
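To make the prediction setup concrete, below is a minimal, hypothetical sketch of training and cross-validating a binary trust classifier on gaze and physiological features. The feature names, the model family (a random forest via scikit-learn), and the cross-validation scheme are illustrative assumptions; the abstract does not specify the authors' actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per trial. Hypothetical columns
# might include mean fixation duration, dwell time on the source label,
# heart-rate variability from ECG, mean EDA, and skin temperature.
# Replace with features actually extracted from the recordings.
n_trials, n_features = 480, 5          # e.g., 40 participants x 12 trials
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)  # binary self-reported trust: 1 = trusted

# Standardize features, then fit a random forest; evaluate with
# stratified 5-fold cross-validated accuracy.
clf = make_pipeline(
    StandardScaler(),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Mean CV accuracy: {scores.mean():.2f}")

With real features in place of the random placeholders, a participant-wise split (e.g., scikit-learn's GroupKFold keyed on participant ID) would be the safer choice, since trials from the same person are correlated and pooled splits can inflate accuracy.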
Similar Papers
Inferring trust in recommendation systems from brain, behavioural, and physiological data
Human-Computer Interaction
Helps AI learn how much people trust it.
Using Physiological Measures, Gaze, and Facial Expressions to Model Human Trust in a Robot Partner
Robotics
Helps robots know when people trust them.
Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance
Human-Computer Interaction
People trust computers more when they don't trust people.