Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness

Published: October 13, 2025 | arXiv ID: 2510.11164v1

By: Ilaria Vascotto, Alex Rodriguez, Alessandro Bonaita, and more

Potential Business Impact:

Makes AI decisions easier to trust.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The use of Artificial Intelligence (AI) models in real-world and high-risk applications has intensified the discussion about their trustworthiness and ethical usage, from both a technical and a legislative perspective. The field of eXplainable Artificial Intelligence (XAI) addresses this challenge by proposing explanations that bring to light the decision-making processes of complex black-box models. Although it is an essential property, the robustness of explanations is often overlooked during development: only robust explanation methods can increase trust in the system as a whole. This paper investigates the role of robustness through the use of a feature importance aggregation derived from multiple models ($k$-nearest neighbours, random forest and neural networks). Preliminary results showcase the potential to increase the trustworthiness of the application while leveraging multiple models' predictive power.
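To make the multi-model aggregation idea concrete, the sketch below trains the three model families named in the abstract and averages their normalized, model-agnostic permutation importances. This is an illustrative reading, not the authors' exact method: the dataset, the choice of permutation importance, and the simple mean as the aggregation rule are all assumptions made for the example.

```python
# Hedged sketch: aggregate feature importances across k-NN, random forest,
# and a neural network. The mean of normalized permutation importances is
# an assumed aggregation rule, not necessarily the one used in the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "rf": RandomForestClassifier(random_state=0),
    "mlp": make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=1000, random_state=0)),
}

per_model_importances = []
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Model-agnostic importance: the drop in held-out score when
    # a feature's values are randomly shuffled.
    result = permutation_importance(model, X_te, y_te,
                                    n_repeats=10, random_state=0)
    imp = np.clip(result.importances_mean, 0, None)
    imp = imp / (imp.sum() + 1e-12)  # normalize so each model weighs equally
    per_model_importances.append(imp)

# Aggregated explanation: mean of the per-model normalized scores.
aggregated = np.mean(per_model_importances, axis=0)
top5 = np.argsort(aggregated)[::-1][:5]
print("Top-5 aggregated feature indices:", top5)
```

Permutation importance is used here only because it applies uniformly across all three model families; any model-agnostic attribution method could be substituted in the same loop.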

Country of Origin
🇮🇹 Italy

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)