Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study
By: Mahdi Dhaini, Juraj Vladika, Ege Erdogan, and more
Potential Business Impact:
Computers can now explain their answers without humans.
In the rapidly evolving field of Explainable Natural Language Processing (NLP), textual explanations, i.e., human-like rationales, are pivotal for explaining model predictions and enriching datasets with interpretable labels. Traditional approaches rely on human annotation, which is costly, labor-intensive, and difficult to scale. In this work, we present an automated framework that leverages multiple state-of-the-art large language models (LLMs) to generate high-quality textual explanations. We rigorously assess the quality of these LLM-generated explanations using a comprehensive suite of Natural Language Generation (NLG) metrics. Furthermore, we investigate the downstream impact of these explanations on the performance of pre-trained language models (PLMs) and LLMs on natural language inference tasks across two diverse benchmark datasets. Our experiments demonstrate that automated explanations are highly competitive with human-annotated explanations in improving model performance. Our findings underscore a promising avenue for scalable, automated LLM-based textual explanation generation for extending NLP datasets and enhancing model performance.
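To make the described pipeline concrete, the sketch below shows one plausible way an explanation-augmentation step could look: an LLM is prompted for a one-sentence rationale for an NLI example, and the rationale is appended to the classifier input. This is a minimal illustration, not the paper's actual implementation; the prompt wording, the query_llm stub, and the [SEP]-joined input format are all assumptions.

```python
# Illustrative sketch (assumptions): prompt an LLM for a free-text rationale
# for an NLI example and build an explanation-augmented classifier input.

EXPLANATION_PROMPT = (
    "Premise: {premise}\n"
    "Hypothesis: {hypothesis}\n"
    "Label: {label}\n"
    "Explain in one sentence why this label is correct."
)

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder standing in for a call to any LLM API or client."""
    return "<llm-generated rationale>"

def augment_nli_example(premise: str, hypothesis: str, label: str) -> str:
    """Generate a rationale and join it with the premise and hypothesis."""
    rationale = query_llm(
        EXPLANATION_PROMPT.format(premise=premise, hypothesis=hypothesis, label=label)
    )
    # One plausible input format for a downstream classifier:
    # premise, hypothesis, and rationale separated by [SEP] tokens.
    return f"{premise} [SEP] {hypothesis} [SEP] {rationale}"

if __name__ == "__main__":
    print(augment_nli_example(
        premise="A man is playing a guitar on stage.",
        hypothesis="A musician is performing.",
        label="entailment",
    ))
```

In a setup like the one the abstract describes, such augmented inputs could then be used to fine-tune a PLM classifier, while the generated rationales themselves could be scored against human-written references with standard NLG metrics.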
Similar Papers
Regularization Through Reasoning: Systematic Improvements in Language Model Classification via Explanation-Enhanced Fine-Tuning
Machine Learning (CS)
Makes AI better at choosing the right answer.
From latent factors to language: a user study on LLM-generated explanations for an inherently interpretable matrix-based recommender system
Artificial Intelligence
Helps people understand why computers suggest things.
Selecting the Right LLM for eGov Explanations
Computers and Society
Helps government explain things better to people.