Score: 2

Who Sees the Risk? Stakeholder Conflicts and Explanatory Policies in LLM-based Risk Assessment

Published: November 5, 2025 | arXiv ID: 2511.03152v1

By: Srishti Yadav, Jasmina Gajcin, Erik Miehling, and more

BigTech Affiliations: IBM

Potential Business Impact:

Reveals where different stakeholders agree or disagree about AI risks, supporting more transparent risk assessment and governance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Understanding how different stakeholders perceive risks in AI systems is essential for their responsible deployment. This paper presents a framework for stakeholder-grounded risk assessment that uses LLMs acting as judges to predict and explain risks. Using the Risk Atlas Nexus and the GloVE explanation method, our framework generates stakeholder-specific, interpretable policies that show how different stakeholders agree or disagree about the same risks. We demonstrate our method on three real-world AI use cases: medical AI, autonomous vehicles, and fraud detection. We further propose an interactive visualization that reveals how and why conflicts emerge across stakeholder perspectives, enhancing transparency in conflict reasoning. Our results show that stakeholder perspectives significantly influence risk perception and conflict patterns. Our work emphasizes that stakeholder-aware explanations are needed to make LLM-based evaluations more transparent, interpretable, and aligned with human-centered AI governance goals.
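To make the LLM-as-judge idea concrete, here is a minimal Python sketch of stakeholder-grounded risk judgments and conflict detection. The prompt wording, risk list, stakeholder personas, and the `llm` callable are illustrative assumptions, not the paper's actual pipeline; in particular, the sketch does not reproduce the Risk Atlas Nexus taxonomy or the GloVE policy-extraction step.

```python
from collections import defaultdict
from typing import Callable, Dict, List

# Hypothetical sketch: one LLM judgment per (stakeholder, risk) pair, then
# flag risks on which stakeholders disagree. All names here are placeholders.

def judge_risk(llm: Callable[[str], str], stakeholder: str, use_case: str, risk: str) -> bool:
    """Ask an LLM, acting as a judge for one stakeholder, whether a risk applies."""
    prompt = (
        f"You are assessing an AI system from the perspective of a {stakeholder}.\n"
        f"Use case: {use_case}\n"
        f"Risk: {risk}\n"
        "Answer YES if this risk is relevant from that perspective, otherwise NO."
    )
    return llm(prompt).strip().upper().startswith("YES")

def find_conflicts(
    llm: Callable[[str], str],
    stakeholders: List[str],
    use_case: str,
    risks: List[str],
) -> Dict[str, Dict[str, bool]]:
    """Collect per-stakeholder judgments and keep only risks where they disagree."""
    judgments: Dict[str, Dict[str, bool]] = defaultdict(dict)
    for risk in risks:
        for stakeholder in stakeholders:
            judgments[risk][stakeholder] = judge_risk(llm, stakeholder, use_case, risk)
    return {risk: votes for risk, votes in judgments.items() if len(set(votes.values())) > 1}

if __name__ == "__main__":
    # Stub LLM so the sketch runs without an API; swap in a real model client.
    def stub_llm(prompt: str) -> str:
        return "YES" if "patient" in prompt and "privacy" in prompt.lower() else "NO"

    conflicts = find_conflicts(
        stub_llm,
        stakeholders=["patient", "clinician", "hospital administrator"],
        use_case="LLM-based triage assistant in a hospital",
        risks=["privacy violation", "incorrect diagnosis suggestion"],
    )
    print(conflicts)  # risks on which the stakeholder judgments diverge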

Country of Origin
🇺🇸 🇩🇰 United States, Denmark

Page Count
7 pages

Category
Computer Science:
Computation and Language