Explaining Decisions in ML Models: a Parameterized Complexity Analysis (Part I)

Published: November 5, 2025 | arXiv ID: 2511.03545v1

By: Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki and more

Potential Business Impact:

Makes AI models easier to understand and trust.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Boolean Circuits, and ensembles thereof, each offering unique explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexities of generating explanations for these models. This work provides insights vital for further research in the domain of XAI, contributing to the broader discourse on the necessity of transparency and accountability in AI systems.
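To make the two explanation types concrete: a local abductive explanation is a subset of an instance's feature values that, on its own, forces the model's prediction. The following minimal sketch illustrates the idea on a toy Boolean classifier; the model, function names, and greedy subset-minimization strategy are illustrative assumptions, not the paper's constructions or complexity-theoretic algorithms.

```python
from itertools import product

def model(x):
    # Toy transparent classifier (stand-in for a small decision tree):
    # predicts 1 iff (x0 AND x1) OR x2.
    return int((x[0] and x[1]) or x[2])

def is_abductive(instance, subset, n=3):
    """Check whether fixing the features in `subset` to their values in
    `instance` forces the model's prediction on every completion of the
    remaining features (brute force over all assignments)."""
    target = model(instance)
    for assignment in product([0, 1], repeat=n):
        candidate = [instance[i] if i in subset else assignment[i]
                     for i in range(n)]
        if model(candidate) != target:
            return False
    return True

def minimal_abductive(instance, n=3):
    """Greedily shrink the full feature set to a subset-minimal
    abductive explanation for `instance`."""
    subset = set(range(n))
    for i in range(n):
        if is_abductive(instance, subset - {i}, n):
            subset.discard(i)
    return subset

x = [1, 1, 0]                  # model(x) == 1 via the (x0 AND x1) branch
print(minimal_abductive(x))    # {0, 1}: fixing x0=1 and x1=1 suffices
```

A contrastive explanation is the dual notion: a subset of features whose values must change to flip the prediction. The brute-force check above runs in time exponential in the number of features, which is exactly the kind of cost the paper's parameterized analysis aims to pin down for each model class.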

Country of Origin
🇦🇹 Austria

Page Count
28 pages

Category
Computer Science:
Artificial Intelligence