Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI
By: Maryam Mosleh, Marie Devlin, Ellis Solaiman
Potential Business Impact:
Helps computers explain their answers like a teacher.
Artificial intelligence-driven adaptive learning systems are reshaping education by tailoring learning experiences to data. Yet many of these systems lack transparency, offering limited insight into how decisions are made. Most explainable AI (XAI) techniques focus on technical outputs but neglect user roles and comprehension. This paper proposes a hybrid framework that integrates traditional XAI techniques with generative AI models and user personalisation to produce multimodal, personalised explanations tailored to user needs. We redefine explainability as a dynamic communication process shaped by user roles and learning goals. We outline the framework's design, key XAI limitations in education, and research directions on accuracy, fairness, and personalisation. Our aim is to move towards explainable AI that enhances transparency while supporting user-centred learning experiences.
Similar Papers
Decoding the Multimodal Maze: A Systematic Review on the Adoption of Explainability in Multimodal Attention-based Models
Machine Learning (CS)
Helps understand how AI uses different information.
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
AI explains decisions like a helpful friend.
On the Design and Evaluation of Human-centered Explainable AI Systems: A Systematic Review and Taxonomy
Artificial Intelligence
Helps people understand how smart computers make choices.