LLM Driven Processes to Foster Explainable AI
By: Marcel Pehlke, Marc Jansen
Potential Business Impact:
Helps computers make smart choices with clear steps.
We present a modular, explainable LLM-agent pipeline for decision support that externalizes reasoning into auditable artifacts. The system instantiates three frameworks: Vester's Sensitivity Model (factor set, signed impact matrix, systemic roles, feedback loops); normal-form games (strategies, payoff matrix, equilibria); and sequential games (role-conditioned agents, tree construction, backward induction), with swappable modules at every step. LLM components (default: GPT-5) are paired with deterministic analyzers for equilibria and matrix-based role classification, yielding traceable intermediates rather than opaque outputs. In a real-world logistics case (100 runs), mean factor alignment with a human baseline was 55.5% over 26 factors and 62.9% on the transport-core subset; role agreement over matches was 57%. An LLM judge using an eight-criterion rubric (max 100) scored runs on par with a reconstructed human baseline. Configurable LLM pipelines can thus mimic expert workflows with transparent, inspectable steps.
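To make the two deterministic analyzers concrete, here is a minimal sketch of what they might look like: Vester-style role classification from a signed impact matrix (using active/passive sums split at the median, one common convention) and pure-strategy Nash equilibrium detection for a two-player normal-form game. The function names, the median threshold, and the payoff representation are illustrative assumptions, not the paper's actual implementation.

```python
from statistics import median

def classify_roles(impact):
    """Vester-style systemic role classification.
    impact[i][j] = signed influence of factor i on factor j (diagonal ignored).
    Active sum = row sum of |impacts|, passive sum = column sum; roles are
    split at the median of each sum (an illustrative threshold choice)."""
    n = len(impact)
    active = [sum(abs(impact[i][j]) for j in range(n) if j != i) for i in range(n)]
    passive = [sum(abs(impact[i][j]) for i in range(n) if i != j) for j in range(n)]
    a_med, p_med = median(active), median(passive)
    roles = []
    for a, p in zip(active, passive):
        if a > a_med and p > p_med:
            roles.append("critical")    # strongly influences and is influenced
        elif a > a_med:
            roles.append("active")      # drives the system
        elif p > p_med:
            roles.append("reactive")    # driven by the system
        else:
            roles.append("buffering")   # weakly coupled
    return roles

def pure_nash(p1, p2):
    """Pure-strategy Nash equilibria of a 2-player normal-form game.
    p1[i][j], p2[i][j] = payoffs when row plays i and column plays j.
    A cell is an equilibrium iff neither player gains by deviating alone."""
    rows, cols = len(p1), len(p1[0])
    return [(i, j)
            for i in range(rows)
            for j in range(cols)
            if p1[i][j] >= max(p1[k][j] for k in range(rows))
            and p2[i][j] >= max(p2[i][l] for l in range(cols))]
```

For example, on the Prisoner's Dilemma payoffs `p1 = [[-1, -3], [0, -2]]`, `p2 = [[-1, 0], [-3, -2]]`, `pure_nash` returns the single mutual-defection cell `[(1, 1)]`. Because both analyzers are plain matrix computations, their outputs are reproducible intermediates that can be audited independently of the LLM components.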
Similar Papers
Increasing AI Explainability by LLM Driven Standard Processes
Artificial Intelligence
Makes AI decisions clear and trustworthy.
From Theory to Practice: Real-World Use Cases on Trustworthy LLM-Driven Process Modeling, Prediction and Automation
Software Engineering
AI helps businesses manage work better.
Evaluation of LLMs for Process Model Analysis and Optimization
Artificial Intelligence
Helps computers find mistakes in business plans.