A Methodological Framework and Questionnaire for Investigating Perceived Algorithmic Fairness
By: Ahmed Abdal Shafi Rasel, Ahmed Mustafa Amlan, Tasmim Shajahan Mim, and more
Potential Business Impact:
Shows how users in Bangladesh perceive fairness in AI decision-making, informing culturally aware system design.
This study explores perceptions of fairness in algorithmic decision-making among users in Bangladesh through a comprehensive mixed-methods approach. By integrating quantitative survey data with qualitative interview insights, we examine how cultural, social, and contextual factors influence users' understanding of fairness, transparency, and accountability in AI systems. Our findings reveal nuanced attitudes toward human oversight, explanation mechanisms, and contestability, highlighting the importance of culturally aware design principles for equitable and trustworthy algorithmic systems. These insights contribute to ongoing discussions on algorithmic fairness by foregrounding perspectives from a non-Western context, thus broadening the global dialogue on ethical AI deployment.
Similar Papers
Algorithmic Fairness: Not a Purely Technical but Socio-Technical Property
Machine Learning (CS)
Argues that algorithmic fairness is a socio-technical property, not a purely technical one.
Argumentative Debates for Transparent Bias Detection [Technical Report]
Artificial Intelligence
Detects bias in AI transparently by making its reasoning explicit through argumentative debates.
A Unifying Human-Centered AI Fairness Framework
Machine Learning (CS)
Proposes a unifying, human-centered framework for fairness in AI systems.