Multi-criteria Rank-based Aggregation for Explainable AI
By: Sujoy Chatterjee, Everton Romanzini Colombo, Marcos Medeiros Raimundo
Potential Business Impact:
Makes AI explanations more trustworthy and consistent.
Explainability is crucial for improving the transparency of black-box machine learning models. With the advancement of explanation methods such as LIME and SHAP, various XAI performance metrics have been developed to evaluate the quality of explanations. However, different explainers can provide contrasting explanations for the same prediction, introducing trade-offs across conflicting quality metrics. Although existing aggregation approaches improve robustness by reducing the variability of explanations, very little research has employed a multi-criteria decision-making approach. To address this gap, this paper introduces a multi-criteria rank-based weighted aggregation method that balances multiple quality metrics simultaneously to produce an ensemble of explanation models. Furthermore, we propose rank-based versions of existing XAI metrics (complexity, faithfulness, and stability) to better evaluate ranked feature-importance explanations. Extensive experiments on publicly available datasets demonstrate the robustness of the proposed model across these metrics. Comparative analyses of various multi-criteria decision-making and rank aggregation algorithms show that TOPSIS and WSUM are the best candidates for this use case.
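The abstract names TOPSIS and WSUM but does not spell out the computation. The following Python sketch illustrates one plausible pipeline under stated assumptions, not the authors' implementation: each explainer is scored via TOPSIS over hypothetical quality metrics (faithfulness, stability, complexity), and the explainers' feature rankings are then combined by a weighted sum. All metric values, function names, and aggregation details here are illustrative.

import numpy as np

def topsis_weights(criteria, benefit):
    """TOPSIS closeness scores for each alternative (explainer).

    criteria: (n_explainers, n_metrics) matrix of quality-metric values.
    benefit:  boolean per metric; True if larger is better (e.g. faithfulness),
              False if smaller is better (e.g. complexity).
    """
    # Vector-normalize each criterion column.
    norm = criteria / np.linalg.norm(criteria, axis=0)
    # Ideal and anti-ideal points, respecting benefit/cost direction.
    ideal = np.where(benefit, norm.max(axis=0), norm.min(axis=0))
    anti = np.where(benefit, norm.min(axis=0), norm.max(axis=0))
    d_best = np.linalg.norm(norm - ideal, axis=1)
    d_worst = np.linalg.norm(norm - anti, axis=1)
    return d_worst / (d_best + d_worst)  # closeness to the ideal, in (0, 1]

def wsum_rank_aggregate(rankings, weights):
    """Weighted-sum (WSUM) aggregation of feature ranks.

    rankings: (n_explainers, n_features) rank matrix, rank 1 = most important.
    weights:  per-explainer weights (here, TOPSIS closeness scores).
    Returns the consensus feature order, most important first.
    """
    weights = np.asarray(weights) / np.sum(weights)
    scores = weights @ rankings   # weighted average rank per feature
    return np.argsort(scores)     # lower mean rank = more important

# Toy example: three explainers (e.g. LIME, SHAP, a gradient method)
# ranking four features, with made-up quality-metric values.
rankings = np.array([[1, 2, 3, 4],
                     [2, 1, 3, 4],
                     [1, 3, 2, 4]])
metrics = np.array([[0.9, 0.8, 0.3],   # faithfulness, stability, complexity
                    [0.7, 0.9, 0.2],
                    [0.6, 0.6, 0.5]])
w = topsis_weights(metrics, benefit=np.array([True, True, False]))
print(wsum_rank_aggregate(rankings, w))  # consensus feature order

Whether ranks are averaged, summed, or aggregated by another rank-fusion rule is a design choice the paper evaluates; the weighted average above is just one simple instance.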
Similar Papers
Evaluating the Effectiveness of XAI Techniques for Encoder-Based Language Models
Computation and Language
Helps understand how AI makes decisions.
Unifying VXAI: A Systematic Review and Framework for the Evaluation of Explainable AI
Machine Learning (CS)
Helps AI explain its decisions clearly.
Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness
Machine Learning (CS)
Makes AI decisions easier to trust.