Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives

Published: October 13, 2025 | arXiv ID: 2510.11079v1

By: Andrada Iulia Prajescu, Roberto Confalonieri

Potential Business Impact:

Explains the reasoning behind AI-driven legal decisions, making them transparent and contestable.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Artificial Intelligence (AI) systems are increasingly deployed in legal contexts, where their opacity raises significant challenges for fairness, accountability, and trust. The so-called "black box problem" undermines the legitimacy of automated decision-making, as affected individuals often lack access to meaningful explanations. In response, the field of Explainable AI (XAI) has proposed a variety of methods to enhance transparency, ranging from example-based and rule-based techniques to hybrid and argumentation-based approaches. This paper promotes computational models of argument and their role in providing legally relevant explanations, with particular attention to their alignment with emerging regulatory frameworks such as the EU General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). We analyze the strengths and limitations of different explanation strategies, evaluate their applicability to legal reasoning, and highlight how argumentation frameworks -- by capturing the defeasible, contestable, and value-sensitive nature of law -- offer a particularly robust foundation for explainable legal AI. Finally, we identify open challenges and research directions, including bias mitigation, empirical validation in judicial settings, and compliance with evolving ethical and legal standards, arguing that computational argumentation is best positioned to meet both the technical and normative requirements of transparency in the legal domain.
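
To make the argumentation-based approach concrete, the sketch below implements a Dung-style abstract argumentation framework and computes its grounded extension, i.e., the set of arguments that can be defended against every attack. This is only a minimal illustration of standard Dung semantics, not the paper's own method; the argument names and the attack relation in the toy legal example are hypothetical.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# The grounded extension is the least fixed point of the characteristic
# function F(S) = {a | every attacker of a is attacked by some member of S}.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of (arguments, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(arg, s):
        # arg is acceptable w.r.t. s if each of its attackers is attacked by s
        return all(any((d, att) in attacks for d in s) for att in attackers[arg])

    s = set()
    while True:
        new_s = {a for a in arguments if defended(a, s)}
        if new_s == s:
            return s
        s = new_s


# Hypothetical toy case (names are illustrative only): an exception argument
# attacks the liability claim, but the exception itself is rebutted.
args = {"liable", "exception_applies", "exception_rebutted"}
atts = {("exception_applies", "liable"),
        ("exception_rebutted", "exception_applies")}

print(sorted(grounded_extension(args, atts)))
# -> ['exception_rebutted', 'liable']: the exception is defeated, so liability stands.
```

The example shows the defeasible, contestable character the abstract attributes to legal reasoning: adding a new attacking argument can overturn a previously accepted conclusion, and the resulting extension itself serves as an explanation of why a conclusion is (or is not) accepted.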

Country of Origin
🇮🇹 Italy

Page Count
31 pages

Category
Computer Science:
Artificial Intelligence