From "Thinking" to "Justifying": Aligning High-Stakes Explainability with Professional Communication Standards
By: Chen Qian, Yimeng Wang, Yu Chen, and more
Explainable AI (XAI) in high-stakes domains should help stakeholders trust and verify system outputs. Yet Chain-of-Thought (CoT) methods reason before concluding, and logical gaps or hallucinations in that reasoning can yield conclusions that do not reliably align with the stated rationale. We therefore propose "Result -> Justify", which constrains the output to present a conclusion before its structured justification. We introduce SEF (Structured Explainability Framework), which operationalizes professional communication conventions (e.g., CREAC, BLUF) via six metrics for structure and grounding. Experiments across four tasks in three domains validate this approach: all six metrics correlate with correctness (r = 0.20-0.42, p < 0.001), and SEF achieves 83.9% accuracy (+5.3 points over CoT). These results suggest structured justification can improve verifiability and may also improve reliability.
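To illustrate the "Result -> Justify" constraint, the sketch below shows one plausible way to prompt for and sanity-check an output that states its conclusion first and then justifies it under CREAC/BLUF-style headings. The section names, prompt wording, and check logic are assumptions for illustration only; the paper's actual SEF prompts and its six structure/grounding metrics are not reproduced here.

```python
# Hypothetical sketch of a "Result -> Justify" output contract.
# Not the paper's SEF implementation; section names and prompt are assumed for illustration.
from dataclasses import dataclass

# Assumed CREAC/BLUF-style layout: conclusion first, then structured justification.
REQUIRED_SECTIONS = ["CONCLUSION", "RULE", "EXPLANATION", "APPLICATION", "RESTATED CONCLUSION"]

PROMPT_TEMPLATE = (
    "Answer the question below. State your CONCLUSION first (bottom line up front), "
    "then justify it under the headings RULE, EXPLANATION, APPLICATION, and "
    "RESTATED CONCLUSION, citing only facts given in the input.\n\n"
    "Question: {question}\nInput facts: {facts}"
)

@dataclass
class StructuredAnswer:
    sections: dict  # heading -> body text, in order of appearance

def parse_sections(text: str) -> StructuredAnswer:
    """Split a response made of 'HEADING: body' lines into ordered sections."""
    sections, current = {}, None
    for line in text.splitlines():
        head, sep, rest = line.partition(":")
        if sep and head.strip().upper() in REQUIRED_SECTIONS:
            current = head.strip().upper()
            sections[current] = [rest.strip()]
        elif current is not None:
            sections[current].append(line.strip())
    return StructuredAnswer({k: " ".join(p for p in v if p) for k, v in sections.items()})

def conclusion_leads(answer: StructuredAnswer) -> bool:
    """Check the 'Result -> Justify' ordering: the conclusion must come first and all sections must be present."""
    order = list(answer.sections)
    return bool(order) and order[0] == "CONCLUSION" and all(s in answer.sections for s in REQUIRED_SECTIONS)

if __name__ == "__main__":
    demo = (
        "CONCLUSION: The claim is not supported.\n"
        "RULE: A claim is supported only if every cited fact appears in the input.\n"
        "EXPLANATION: Fact (2) is absent from the provided record.\n"
        "APPLICATION: Because fact (2) is missing, the claim fails the rule above.\n"
        "RESTATED CONCLUSION: Not supported."
    )
    print("Result -> Justify ordering satisfied:", conclusion_leads(parse_sections(demo)))
```

Under SEF, an output structured this way would then be scored on the six structure and grounding metrics reported in the abstract; those metric definitions are left out of this sketch.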
Similar Papers
Too Much to Trust? Measuring the Security and Cognitive Impacts of Explainability in AI-Driven SOCs
Cryptography and Security
Measures how explainability affects security analysts' trust and cognitive load in AI-driven security operations centers.
A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems
Machine Learning (CS)
Surveys human-centered evaluations of explainable AI in clinical decision support systems.
Position: Intelligent Coding Systems Should Write Programs with Justifications
Software Engineering
Argues that AI coding systems should accompany generated programs with justifications.