Who Sees the Risk? Stakeholder Conflicts and Explanatory Policies in LLM-based Risk Assessment
By: Srishti Yadav, Jasmina Gajcin, Erik Miehling, and more
Potential Business Impact:
Helps organizations see how different people perceive the risks of AI systems.
Understanding how different stakeholders perceive risks in AI systems is essential for the responsible deployment of those systems. This paper presents a framework for stakeholder-grounded risk assessment that uses LLMs acting as judges to predict and explain risks. Using the Risk Atlas Nexus and the GloVE explanation method, our framework generates stakeholder-specific, interpretable policies that show how different stakeholders agree or disagree about the same risks. We demonstrate the method on three real-world AI use cases: medical AI, autonomous vehicles, and fraud detection. We further propose an interactive visualization that reveals how and why conflicts emerge across stakeholder perspectives, enhancing transparency in conflict reasoning. Our results show that stakeholder perspectives significantly influence risk perception and conflict patterns. Our work emphasizes the importance of stakeholder-aware explanations in making LLM-based evaluations more transparent, interpretable, and aligned with human-centered AI governance goals.
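To make the shape of such a framework concrete, here is a minimal sketch of stakeholder-conditioned, LLM-as-judge risk assessment with simple conflict detection. This is not the paper's implementation: the stakeholder roles, risk labels, prompt wording, and the `judge_risk` stub are illustrative assumptions, and a real pipeline would call an actual LLM and derive GloVE-style interpretable policies from its verdicts and rationales.

```python
"""Hedged sketch: stakeholder-grounded LLM-as-judge risk assessment.

All names below (stakeholder roles, risk labels, the judge_risk stub)
are hypothetical placeholders, not the paper's code.
"""

from itertools import combinations

# Illustrative stakeholder roles and candidate risks for a medical-AI use case.
STAKEHOLDERS = ["patient", "clinician", "hospital administrator", "regulator"]
RISKS = ["hallucination", "data privacy", "lack of transparency"]

USE_CASE = "An LLM assistant that drafts discharge summaries for clinicians."


def build_judge_prompt(stakeholder: str, risk: str, use_case: str) -> str:
    """Compose a stakeholder-conditioned prompt for the LLM judge."""
    return (
        f"You are assessing the use case: {use_case}\n"
        f"From the perspective of a {stakeholder}, is the risk of "
        f"'{risk}' relevant? Answer 'yes' or 'no' and explain briefly."
    )


def judge_risk(prompt: str) -> bool:
    """Placeholder for an LLM-as-judge call.

    Returns a deterministic dummy verdict so the sketch runs offline;
    a real system would send the prompt to a chat-completion endpoint
    and parse the model's yes/no answer plus its rationale.
    """
    return sum(map(ord, prompt)) % 2 == 0


def conflict_pairs(verdicts: dict):
    """Yield (risk, stakeholder_a, stakeholder_b) where judgments disagree."""
    for risk in RISKS:
        for a, b in combinations(STAKEHOLDERS, 2):
            if verdicts[a][risk] != verdicts[b][risk]:
                yield risk, a, b


if __name__ == "__main__":
    # One verdict per (stakeholder, risk) pair.
    verdicts = {
        s: {r: judge_risk(build_judge_prompt(s, r, USE_CASE)) for r in RISKS}
        for s in STAKEHOLDERS
    }
    for risk, a, b in conflict_pairs(verdicts):
        print(f"Conflict on '{risk}': {a} vs. {b}")
```

Under these assumptions, the per-stakeholder verdict table is the raw material for the stakeholder-specific policies and the conflict visualization described in the abstract.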
Similar Papers
"Would You Want an AI Tutor?" Understanding Stakeholder Perceptions of LLM-based Systems in the Classroom
Computers and Society
Helps schools use AI tutors safely and well.
Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs
Artificial Intelligence
Makes AI understandable and trustworthy for everyone.
Explainable AI in Usable Privacy and Security: Challenges and Opportunities
Human-Computer Interaction
Makes AI explain privacy rules clearly and reliably.