A Multimodal XAI Framework for Trustworthy CNNs and Bias Detection in Deep Representation Learning

Published: October 14, 2025 | arXiv ID: 2510.12957v1

By: Noor Islam S. Mohammad

Potential Business Impact:

Makes AI systems fairer and more trustworthy for high-stakes applications.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Standard benchmark datasets, such as MNIST, often fail to expose latent biases and multimodal feature complexities, limiting the trustworthiness of deep neural networks in high-stakes applications. We propose a novel multimodal Explainable AI (XAI) framework that unifies attention-augmented feature fusion, Grad-CAM++-based local explanations, and a Reveal-to-Revise feedback loop for bias detection and mitigation. Evaluated on multimodal extensions of MNIST, our approach achieves 93.2% classification accuracy, 91.6% F1-score, and 78.1% explanation fidelity (IoU-XAI), outperforming unimodal and non-explainable baselines. Ablation studies demonstrate that integrating interpretability with bias-aware learning enhances robustness and human alignment. Our work bridges the gap between performance, transparency, and fairness, highlighting a practical pathway for trustworthy AI in sensitive domains.
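The abstract reports explanation fidelity as "IoU-XAI" but does not spell out the computation here. A minimal sketch of how such a score is commonly computed, assuming it measures Intersection-over-Union between a thresholded saliency map (e.g., from Grad-CAM++) and a binary ground-truth relevance mask; the function name, threshold, and toy data below are illustrative, not the paper's implementation:

```python
# Hypothetical IoU-based explanation-fidelity score ("IoU-XAI").
# Assumption: the metric binarizes a saliency map and compares it against
# an annotated relevance mask via Intersection-over-Union.
import numpy as np

def iou_xai(saliency: np.ndarray, relevance_mask: np.ndarray,
            threshold: float = 0.5) -> float:
    """IoU between a binarized saliency map and a ground-truth
    relevance mask, both of shape (H, W)."""
    # Normalize saliency to [0, 1] before thresholding.
    s = saliency - saliency.min()
    s = s / (s.max() + 1e-8)
    pred = s >= threshold             # binarized explanation
    gt = relevance_mask.astype(bool)  # annotated "important" pixels

    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection / union) if union > 0 else 1.0

# Toy usage: a 28x28 MNIST-sized saliency map scored against a mask.
rng = np.random.default_rng(0)
saliency = rng.random((28, 28))
mask = np.zeros((28, 28))
mask[10:18, 10:18] = 1
print(f"IoU-XAI: {iou_xai(saliency, mask):.3f}")
```

A score near 1.0 would indicate that the model's explanation concentrates on the same regions a human annotator marked as relevant, which is the alignment property the abstract's 78.1% figure quantifies.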

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)