Designing with Culture: How Social Norms Shape Trust and Preference in Health Chatbots
By: Arpita Wadhwa, Aditya Vashistha, Mohit Jain
Potential Business Impact:
Shows how chatbot design choices shape health workers' trust in AI advice.
AI-driven chatbots are increasingly used to support community health workers (CHWs) in developing regions, yet little is known about how cultural framings in chatbot design shape trust in collectivist contexts where decisions are rarely made in isolation. This paper examines how CHWs in rural India responded to chatbots that delivered identical health content but varied in one specific cultural lever -- social norms. Through a mixed-methods study with 61 ASHAs (Accredited Social Health Activists) who compared four normative framings -- neutral, descriptive, narrative identity, and injunctive authority -- we (1) analyze how framings influence preferences and trust, and (2) compare effects across low- and high-ambiguity scenarios. Results show that narrative framings were most preferred but encouraged uncritical overreliance, while authority framings were least preferred yet supported calibrated trust. We conclude with design recommendations for dynamic framing strategies that adapt to context and argue for calibrated trust -- following correct advice and resisting incorrect advice -- as a critical evaluation metric for safe, culturally grounded AI.
Similar Papers
AI Ethics and Social Norms: Exploring ChatGPT's Capabilities From What to How
Computers and Society
Checks if AI like ChatGPT is fair and safe.
A Crowdsourced Study of ChatBot Influence in Value-Driven Decision Making Scenarios
Human-Computer Interaction
Chatbots change minds with just how they talk.
Exploring Artificial Intelligence and Culture: Methodology for a comparative study of AI's impact on norms, trust, and problem-solving across academic and business environments
Human-Computer Interaction
Compares how AI shapes norms, trust, and problem-solving in academic and business settings.