An Argumentative Explanation Framework for Generalized Reason Model with Inconsistent Precedents
By: Wachara Fungwacharakorn, Gauvain Bourgne, Ken Satoh
Potential Business Impact:
Helps AI understand laws with messy rules.
Precedential constraint is one foundation of case-based reasoning in AI and Law. It generally assumes that the underlying set of precedents is consistent. To relax this assumption, a generalized notion of the reason model has been introduced. While several argumentative explanation approaches exist for reasoning with precedents under the traditional, consistent reason model, no corresponding argumentative explanation method has been developed for this generalized framework, which accommodates inconsistent precedents. To address this gap, this paper examines an extension of the derivation state argumentation framework (DSA-framework) to explain reasoning according to the generalized notion of the reason model.
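For background on the kind of reasoning the abstract refers to, here is a minimal Python sketch of the a fortiori test from the factor-based result model of precedential constraint (a simpler relative of the reason model the paper generalizes; this is standard background, not the paper's own formalism). The class name, function name, and factor labels are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Precedent:
    """A decided case: factors favoring the winner (pro) and the loser (con)."""
    pro: frozenset
    con: frozenset
    outcome: str  # e.g. "plaintiff" or "defendant"


def forces_outcome(precedent: Precedent,
                   new_pro: frozenset,
                   new_con: frozenset) -> bool:
    """A fortiori test: the precedent constrains the new case to its outcome
    when the new case is at least as strong for that outcome -- it contains
    every pro factor the precedent had, and no con factor beyond those the
    precedent already overcame."""
    return precedent.pro <= new_pro and new_con <= precedent.con


# Hypothetical factors f1..f4, for illustration only.
p = Precedent(pro=frozenset({"f1", "f2"}), con=frozenset({"f3"}),
              outcome="plaintiff")
print(forces_outcome(p, frozenset({"f1", "f2", "f4"}), frozenset()))    # True
print(forces_outcome(p, frozenset({"f1"}), frozenset({"f3"})))          # False
```

The consistency assumption the paper relaxes shows up here when two precedents with opposite outcomes each pass this test against the same new case: the constraint then forces contradictory decisions, which the classical model rules out by fiat.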
Similar Papers
Defending the Hierarchical Result Models of Precedential Constraint
Artificial Intelligence
Helps computers make better decisions in tricky cases.
A Framework for Causal Concept-based Model Explanations
Artificial Intelligence
Explains how AI makes decisions using simple ideas.
Comparative Expressivity for Structured Argumentation Frameworks with Uncertain Rules and Premises
Artificial Intelligence
Makes computer arguments more believable with uncertainty.