Designing Culturally Aligned AI Systems For Social Good in Non-Western Contexts
By: Deepak Varuvel Dennison, Mohit Jain, Tanuja Ganu, and others
Potential Business Impact:
Helps organizations deploy AI that works reliably across countries, languages, and cultural contexts.
AI technologies are increasingly deployed in high-stakes domains such as education, healthcare, law, and agriculture to address complex challenges in non-Western contexts. This paper examines eight real-world deployments spanning seven countries and 18 languages, combining 17 interviews with AI developers and domain experts with secondary research. Our findings identify six cross-cutting factors (Language, Domain, Demography, Institution, Task, and Safety) that structured how systems were designed and deployed. These factors were shaped by sociocultural (diversity, practices), institutional (resources, policies), and technological (capabilities, limits) influences. We find that building these AI systems required extensive collaboration between AI developers and domain experts. Notably, human resources proved more critical than technological expertise alone to achieving safe and effective systems in high-stakes domains. We present an analytical framework that synthesizes these dynamics and conclude with recommendations for designing AI-for-social-good systems that are culturally grounded, equitable, and responsive to the needs of non-Western contexts.
Similar Papers
Cultural Dimensions of Artificial Intelligence Adoption: Empirical Insights for Wave 1 from a Multinational Longitudinal Pilot Study
Computers and Society
Cultural differences shape how people adopt and trust AI.
Cross-cultural value alignment frameworks for responsible AI governance: Evidence from China-West comparative analysis
Computers and Society
Compares Chinese and Western values to inform responsible AI governance.
AI-Agents for Culturally Diverse Online Higher Education Environments
Computers and Society
AI agents support culturally diverse online higher-education classes.