Enhancing ML Model Interpretability: Leveraging Fine-Tuned Large Language Models for Better Understanding of AI
By: Jonas Bokstaller, Julia Altheimer, Julian Dormehl, and more
Potential Business Impact:
Makes complex ML model decisions understandable to non-expert users.
Applications of eXplainable AI (XAI) have gained momentum across various sectors as the increasing opacity of prevailing Machine Learning (ML) models became apparent. In parallel, Large Language Models (LLMs) have developed significantly in their ability to understand human language and complex patterns. Combining the two, this paper presents a novel reference architecture for interpreting XAI output through an interactive chatbot powered by a fine-tuned LLM. We instantiate the reference architecture in the context of State-of-Health (SoH) prediction for batteries and validate its design in multiple evaluation and demonstration rounds. The evaluation indicates that the implemented prototype enhances the human interpretability of ML, especially for users with little prior XAI experience.
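The abstract describes a pipeline in which XAI output (such as feature attributions) is handed to a fine-tuned LLM chatbot that explains a battery SoH prediction in plain language. A minimal sketch of that idea, assuming SHAP-style per-feature attributions; the function name, feature names, and values are illustrative, not the authors' actual implementation:

```python
# Sketch of the pipeline idea: an ML model predicts battery State-of-Health
# (SoH), an XAI layer produces feature attributions, and those attributions
# are packed into a prompt for an LLM chatbot to explain to a non-expert.

def attribution_prompt(prediction: float, attributions: dict) -> str:
    """Turn a SoH prediction and per-feature attributions into an LLM prompt."""
    # Rank features by absolute influence, strongest first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.3f}" for name, value in ranked]
    return (
        f"The model predicts a battery State-of-Health of {prediction:.1%}.\n"
        "Feature attributions (e.g. SHAP values), most influential first:\n"
        + "\n".join(lines)
        + "\nExplain this prediction to a non-expert user."
    )

# Hypothetical attribution values, for illustration only.
prompt = attribution_prompt(
    0.87,
    {"cycle_count": -0.05, "avg_temperature": -0.02, "charge_rate": 0.01},
)
print(prompt)
```

In the paper's architecture this prompt would be sent to the fine-tuned LLM; here it is only constructed, so the sketch stays runnable without an LLM backend.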