Understanding Ethical Practices in AI: Insights from a Cross-Role, Cross-Region Survey of AI Development Teams
By: Wilder Baldwin, Sepideh Ghanavati, Manuel Woersdoerfer
Potential Business Impact:
Helps make AI safer and fairer.
Recent advances in AI applications have raised growing concerns about the need for ethical guidelines and regulations to mitigate the risks posed by these technologies. In this paper, we present a mixed-methods survey study, combining statistical and qualitative analyses, to examine the ethical perceptions, practices, and knowledge of individuals in various AI development roles. Our survey includes 414 participants from 43 countries, representing roles such as AI managers, analysts, developers, quality assurance professionals, and information security and privacy experts. The results reveal varying degrees of familiarity and experience with AI ethics principles, government initiatives, and risk mitigation strategies across roles, regions, and other demographic factors. Our findings highlight the importance of a collaborative, role-sensitive approach that involves diverse stakeholders in ethical decision-making throughout the AI development lifecycle. We advocate for tailored, inclusive solutions to the ethical challenges of AI development, and we propose future research directions and educational strategies to promote ethics-aware AI practices.
Similar Papers
AI Ethics Education in India: A Syllabus-Level Review of Computing Courses
Computers and Society
Teaches computing students about AI fairness.
Diversity and Inclusion in AI: Insights from a Survey of AI/ML Practitioners
Computers and Society
Makes AI fair and trustworthy for everyone.
Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives
Computers and Society
Makes AI safer and more trustworthy for everyone.