Increasing AI Explainability by LLM Driven Standard Processes
By: Marc Jansen, Marcel Pehlke
This paper introduces an approach to increasing the explainability of artificial intelligence (AI) systems by embedding Large Language Models (LLMs) within standardized analytical processes. While traditional explainable AI (XAI) methods focus on feature attribution or post-hoc interpretation, the proposed framework integrates LLMs into defined decision models such as Question-Option-Criteria (QOC), Sensitivity Analysis, Game Theory, and Risk Management. By situating LLM reasoning within these formal structures, the approach transforms opaque inference into transparent and auditable decision traces. A layered architecture is presented that separates the reasoning space of the LLM from the explainable process space above it. Empirical evaluations show that the system can reproduce human-level decision logic in decentralized governance, systems analysis, and strategic reasoning contexts. The results suggest that LLM-driven standard processes provide a foundation for reliable, interpretable, and verifiable AI-supported decision making.
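To make the idea of an auditable decision trace concrete, the following is a minimal sketch of the Question-Option-Criteria (QOC) model described above, written under assumptions: the class name, the weighted-sum scoring rule, and the example ratings are illustrative placeholders (in the proposed framework, an LLM would supply the ratings; here they are hard-coded stand-ins).

```python
from dataclasses import dataclass, field

@dataclass
class QOCDecision:
    """Minimal Question-Option-Criteria (QOC) model: each option is rated
    against weighted criteria, yielding an auditable score breakdown."""
    question: str
    criteria: dict                                  # criterion name -> weight
    options: dict = field(default_factory=dict)     # option -> {criterion: rating}

    def add_option(self, name, ratings):
        missing = set(self.criteria) - set(ratings)
        if missing:
            raise ValueError(f"option {name!r} lacks ratings for {missing}")
        self.options[name] = ratings

    def evaluate(self):
        """Return (best_option, trace). The trace records every weighted
        contribution, so the decision is transparent and verifiable."""
        trace = {}
        for option, ratings in self.options.items():
            contributions = {c: self.criteria[c] * ratings[c] for c in self.criteria}
            trace[option] = {"contributions": contributions,
                             "total": sum(contributions.values())}
        best = max(trace, key=lambda o: trace[o]["total"])
        return best, trace

# Hypothetical ratings (0..1) standing in for LLM-assigned scores.
qoc = QOCDecision(
    question="Which consensus mechanism should the DAO adopt?",
    criteria={"security": 0.5, "energy efficiency": 0.3, "decentralization": 0.2},
)
qoc.add_option("proof-of-work",
               {"security": 0.9, "energy efficiency": 0.1, "decentralization": 0.6})
qoc.add_option("proof-of-stake",
               {"security": 0.7, "energy efficiency": 0.9, "decentralization": 0.5})
best, trace = qoc.evaluate()
```

Because every weighted contribution is recorded in `trace`, a reviewer can verify exactly why one option outranked another, which is the kind of transparent decision path the framework aims to provide.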