Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness
By: Ilaria Vascotto, Alex Rodriguez, Alessandro Bonaita, and more
Potential Business Impact:
Makes AI decisions easier to trust.
The use of Artificial Intelligence (AI) models in real-world and high-risk applications has intensified the discussion about their trustworthiness and ethical usage, from both a technical and a legislative perspective. The field of eXplainable Artificial Intelligence (XAI) addresses this challenge by proposing explanations that bring to light the decision-making processes of complex black-box models. Despite being an essential property, the robustness of explanations is often overlooked during development: only robust explanation methods can increase trust in the system as a whole. This paper investigates the role of robustness through the use of a feature importance aggregation derived from multiple models ($k$-nearest neighbours, random forest and neural networks). Preliminary results showcase the potential to increase the trustworthiness of the application while leveraging the predictive power of multiple models.
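The abstract does not spell out the aggregation scheme itself. A minimal sketch of the general idea, assuming permutation importance as the per-model explanation and a plain average as the aggregation rule (both are illustrative choices, not the authors' stated method), might look like this:

```python
# Sketch: aggregate feature importances from k-NN, random forest and a neural network.
# Assumptions (not from the paper): permutation importance per model, normalised,
# then averaged across models. Dataset and hyperparameters are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "random_forest": RandomForestClassifier(random_state=0),
    "neural_network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}

per_model = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    imp = result.importances_mean
    # Normalise so each model contributes on a comparable scale.
    per_model[name] = imp / (np.abs(imp).sum() + 1e-12)

# Aggregate: simple average across the three model families (one possible choice).
aggregated = np.mean(np.vstack(list(per_model.values())), axis=0)
top_features = np.argsort(aggregated)[::-1][:5]
print("Top aggregated features:", top_features)
```

The intuition is that a feature consistently ranked as important by several, structurally different models is a more robust candidate explanation than one flagged by a single model.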
Similar Papers
How can we trust opaque systems? Criteria for robust explanations in XAI
Machine Learning (CS)
Makes smart computer guesses understandable and trustworthy.
Onto-Epistemological Analysis of AI Explanations
Artificial Intelligence
Makes AI decisions understandable and trustworthy.
Beware of "Explanations" of AI
Machine Learning (CS)
Makes AI explanations safer and more trustworthy.