Increasing AI Explainability by LLM Driven Standard Processes

Published: November 10, 2025 | arXiv ID: 2511.07083v1

By: Marc Jansen, Marcel Pehlke

Potential Business Impact:

Makes AI-supported decisions transparent, auditable, and easier to trust.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

This paper introduces an approach to increasing the explainability of artificial intelligence (AI) systems by embedding Large Language Models (LLMs) within standardized analytical processes. While traditional explainable AI (XAI) methods focus on feature attribution or post-hoc interpretation, the proposed framework integrates LLMs into defined decision models such as Question-Option-Criteria (QOC), Sensitivity Analysis, Game Theory, and Risk Management. By situating LLM reasoning within these formal structures, the approach transforms opaque inference into transparent and auditable decision traces. A layered architecture is presented that separates the reasoning space of the LLM from the explainable process space above it. Empirical evaluations show that the system can reproduce human-level decision logic in decentralized governance, systems analysis, and strategic reasoning contexts. The results suggest that LLM-driven standard processes provide a foundation for reliable, interpretable, and verifiable AI-supported decision making.
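The Question-Option-Criteria (QOC) model mentioned in the abstract can be illustrated as a weighted scoring structure that records every contribution to a decision, yielding the kind of auditable trace the paper advocates. The following sketch is hypothetical: the function, option names, criteria, and weights are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a Question-Option-Criteria (QOC) decision record:
# each option is scored against weighted criteria, and the full scoring
# table is retained as an auditable trace of how the decision was reached.

def qoc_decide(question, options, criteria, scores):
    """Return (best_option, trace) for a QOC decision.

    criteria: {criterion_name: weight}
    scores:   {(option, criterion_name): score in [0, 1]}
    """
    trace = {"question": question, "rows": []}
    totals = {}
    for opt in options:
        row = {"option": opt, "contributions": {}}
        total = 0.0
        for crit, weight in criteria.items():
            contribution = weight * scores[(opt, crit)]
            row["contributions"][crit] = contribution
            total += contribution
        row["total"] = total
        totals[opt] = total
        trace["rows"].append(row)
    best = max(totals, key=totals.get)
    trace["decision"] = best
    return best, trace

# Illustrative example: choosing a rollout strategy.
best, trace = qoc_decide(
    question="Which rollout strategy should we use?",
    options=["canary", "big-bang"],
    criteria={"risk": 0.6, "speed": 0.4},
    scores={
        ("canary", "risk"): 0.9, ("canary", "speed"): 0.5,
        ("big-bang", "risk"): 0.2, ("big-bang", "speed"): 0.9,
    },
)
# best == "canary"; trace holds every weighted contribution for audit.
```

In the paper's framing, an LLM would supply the per-criterion scores (with justifications), while the formal QOC structure above keeps the reasoning inspectable rather than opaque.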

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence