Accountability of Generative AI: Exploring a Precautionary Approach for "Artificially Created Nature"
By: Yuri Nakao
Potential Business Impact:
Argues for precautionary governance and citizen participation to manage the risks of opaque generative AI systems.
The rapid development of generative artificial intelligence (AI) technologies raises concerns about the accountability of sociotechnical systems. Current generative AI systems rely on mechanisms so complex that even experts cannot fully trace the reasons behind their outputs. This paper first examines existing research on AI transparency and accountability and argues that transparency is not a sufficient condition for accountability, though it can contribute to its improvement. We then argue that if generative AI cannot be made transparent, it becomes "artificially created nature" in a metaphorical sense, and suggest applying the precautionary principle when assessing its risks. Finally, we propose that a platform for citizen participation is needed to address the risks of generative AI.