Explaining Decisions in ML Models: a Parameterized Complexity Analysis (Part I)
By: Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, and more
Potential Business Impact:
Makes AI models easier to understand and trust.
This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Boolean Circuits, and ensembles thereof, each posing its own explanatory challenges. This work fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexity of generating explanations for these models, offering insights vital for further research in the domain and contributing to the broader discourse on the necessity of transparency and accountability in AI systems.
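For readers new to these notions: a local abductive explanation of a prediction is a (subset-minimal) set of feature values of the given instance that by itself guarantees the model's output, while a local contrastive explanation is a minimal set of features whose alteration can change the output. To make the abductive case concrete, here is a minimal Python sketch, assuming a toy nested-tuple encoding of decision trees over binary features; the encoding and all names are illustrative assumptions, not the paper's formalism or code.

```python
# Minimal sketch of a *local abductive explanation* for a toy decision tree.
# The tree encoding and function names below are illustrative assumptions,
# not the paper's formalism.

def classify(tree, instance):
    """Evaluate the tree on a full assignment (dict feature -> 0/1).
    Trees are nested tuples: ('leaf', label) or
    ('node', feature, left_subtree, right_subtree), where the left
    branch is taken when the feature is 0 and the right when it is 1."""
    while tree[0] == 'node':
        _, feat, left, right = tree
        tree = right if instance[feat] else left
    return tree[1]

def forces_class(tree, partial, target):
    """Return True iff every completion of the partial assignment
    (dict feature -> 0/1) reaches a leaf labelled `target`."""
    if tree[0] == 'leaf':
        return tree[1] == target
    _, feat, left, right = tree
    if feat in partial:  # fixed feature: follow its branch only
        return forces_class(right if partial[feat] else left, partial, target)
    # free feature: both branches must still force the target class
    return (forces_class(left, partial, target)
            and forces_class(right, partial, target))

def abductive_explanation(tree, instance):
    """Greedily drop features from the instance while the remaining
    ones still entail the prediction; the result is a subset-minimal
    (local) abductive explanation."""
    target = classify(tree, instance)
    expl = dict(instance)
    for feat in list(expl):
        trial = {f: v for f, v in expl.items() if f != feat}
        if forces_class(tree, trial, target):
            expl = trial  # `feat` was redundant for this prediction
    return expl

# Example: the tree computes x0 AND x1.
tree = ('node', 0, ('leaf', 0), ('node', 1, ('leaf', 0), ('leaf', 1)))
print(abductive_explanation(tree, {0: 1, 1: 1}))  # {0: 1, 1: 1}
print(abductive_explanation(tree, {0: 0, 1: 1}))  # {0: 0} alone forces class 0
```

The greedy loop guarantees only subset-minimality; computing a cardinality-minimum explanation is generally much harder, and questions of exactly this kind, across different model classes and parameters, are what the paper's complexity analysis addresses.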
Similar Papers
Onto-Epistemological Analysis of AI Explanations
Artificial Intelligence
Makes AI decisions understandable and trustworthy.
Beware of "Explanations" of AI
Machine Learning (CS)
Makes AI explanations safer and more trustworthy.
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
AI explains decisions like a helpful friend.