LLM Driven Processes to Foster Explainable AI

Published: November 10, 2025 | arXiv ID: 2511.07086v1

By: Marcel Pehlke, Marc Jansen

Potential Business Impact:

Makes AI-assisted decision support auditable: every reasoning step is externalized as an inspectable artifact, so organizations can trace how a recommendation was produced.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present a modular, explainable LLM-agent pipeline for decision support that externalizes reasoning into auditable artifacts. The system instantiates three frameworks: Vester's Sensitivity Model (factor set, signed impact matrix, systemic roles, feedback loops); normal-form games (strategies, payoff matrix, equilibria); and sequential games (role-conditioned agents, tree construction, backward induction), with swappable modules at every step. LLM components (default: GPT-5) are paired with deterministic analyzers for equilibria and matrix-based role classification, yielding traceable intermediates rather than opaque outputs. In a real-world logistics case (100 runs), mean factor alignment with a human baseline was 55.5% over 26 factors and 62.9% on the transport-core subset; role agreement over matches was 57%. An LLM judge using an eight-criterion rubric (max 100) scored runs on par with a reconstructed human baseline. Configurable LLM pipelines can thus mimic expert workflows with transparent, inspectable steps.
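The paper's actual analyzer implementations are not shown in this listing. As an illustration only, a deterministic equilibrium analyzer for the normal-form-game step could be as simple as an exhaustive pure-strategy Nash check over a bimatrix game (the function name and interface below are hypothetical, not taken from the paper):

```python
import numpy as np

def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria (i, j) of a bimatrix game.

    payoff_a[i, j] is the row player's payoff, payoff_b[i, j] the column
    player's. A cell is an equilibrium when neither player can improve
    their payoff by deviating unilaterally.
    """
    payoff_a = np.asarray(payoff_a)
    payoff_b = np.asarray(payoff_b)
    equilibria = []
    for i in range(payoff_a.shape[0]):
        for j in range(payoff_a.shape[1]):
            # Row player cannot do better by switching rows in column j,
            # and column player cannot do better by switching columns in row i.
            row_best = payoff_a[i, j] >= payoff_a[:, j].max()
            col_best = payoff_b[i, j] >= payoff_b[i, :].max()
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: mutual defection (1, 1) is the unique equilibrium.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)]
```

Because the check is exhaustive and rule-based, its output is fully reproducible, which is what lets such an analyzer serve as a traceable counterpart to the LLM-generated strategy and payoff artifacts.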

Page Count
15 pages

Category
Computer Science:
Artificial Intelligence