Exploring Human Perceptions of AI Responses: Insights from a Mixed-Methods Study on Risk Mitigation in Generative Models
By: Heloisa Candello, Muneeza Azmat, Uma Sushmitha Gunturi, and more
Potential Business Impact:
Makes AI safer by checking its answers.
With the rapid uptake of generative AI, investigating human perceptions of generated responses has become crucial. A major challenge is these models' "aptitude" for hallucinating and generating harmful content. Despite major efforts to implement guardrails, human perceptions of these mitigation strategies remain largely unknown. We conducted a mixed-methods experiment to evaluate the responses of a mitigation strategy across multiple dimensions: faithfulness, fairness, harm-removal capacity, and relevance. In a within-subject design, 57 participants assessed responses under two conditions: a harmful response paired with its mitigation, and the mitigated response alone. Results revealed that participants' native language, AI work experience, and annotation familiarity significantly influenced their evaluations. Participants showed high sensitivity to linguistic and contextual attributes, penalizing minor grammar errors while rewarding preserved semantic context. This contrasts with how language is often treated in the quantitative evaluation of LLMs. We also introduce new metrics for training and evaluating mitigation strategies, along with insights for human-AI evaluation studies.
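The within-subject design described above can be made concrete with a short sketch. The following Python example, which is an assumption and not the authors' code, shows one conventional way to analyze paired ordinal ratings per dimension using a Wilcoxon signed-rank test; the variable names, the 1-5 Likert scale, and the simulated data are all hypothetical.

```python
# Hypothetical sketch of a within-subject analysis: each of the 57
# participants rates responses on four dimensions under two conditions.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_participants = 57
dimensions = ["faithfulness", "fairness", "harm_removal", "relevance"]

# Simulated 1-5 Likert ratings, one per participant per dimension,
# for each condition (harmful response + mitigation vs. mitigated only).
ratings = {
    cond: {d: rng.integers(1, 6, n_participants) for d in dimensions}
    for cond in ("paired", "mitigated_only")
}

# A paired non-parametric test per dimension is a common choice for
# ordinal within-subject data.
for d in dimensions:
    stat, p = wilcoxon(ratings["paired"][d], ratings["mitigated_only"][d])
    print(f"{d}: W={stat:.1f}, p={p:.3f}")
```

In a real analysis, effects of covariates such as native language or AI work experience would typically be tested with a mixed-effects model rather than separate paired tests.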
Similar Papers
Modeling Human Responses to Multimodal AI Content
Artificial Intelligence
Helps AI understand how people react to fake news.
Detecting the Use of Generative AI in Crowdsourced Surveys: Implications for Data Integrity
Human-Computer Interaction
Finds fake answers in online surveys.