Counterfactual Simulatability of LLM Explanations for Generation Tasks
By: Marvin Limpijankit, Yanda Chen, Melanie Subbiah, and more
Potential Business Impact:
Helps measure how well AI explains its answers.
LLMs can be unpredictable, as even slight alterations to the prompt can cause the output to change in unexpected ways. Thus, the ability of models to accurately explain their behavior is critical, especially in high-stakes settings. One approach to evaluating explanations is counterfactual simulatability: how well an explanation allows users to infer the model's output on related counterfactuals. Counterfactual simulatability has previously been studied for yes/no question-answering tasks. We provide a general framework for extending this method to generation tasks, using news summarization and medical suggestion as example use cases. We find that while LLM explanations do enable users to better predict LLM outputs on counterfactuals in the summarization setting, there is significant room for improvement for medical suggestion. Furthermore, our results suggest that the evaluation of counterfactual simulatability may be more appropriate for skill-based tasks than for knowledge-based tasks.
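To make the evaluation concrete, the sketch below shows one way a counterfactual-simulatability loop could be structured for a generation task: generate counterfactual inputs related to the original prompt, ask a simulator (a proxy for the user) to predict the model's output on each counterfactual from the explanation alone, then compare the predictions with the model's actual outputs. This is a minimal illustration, not the paper's implementation; all function names and interfaces (`simulatability_score`, `gen_counterfactuals`, `simulate`, `agree`) are hypothetical placeholders.

```python
# Minimal sketch of a counterfactual-simulatability evaluation loop for a
# generation task. All callables are hypothetical stand-ins (e.g., LLM API
# wrappers), not the paper's actual code.
from typing import Callable, List


def simulatability_score(
    model: Callable[[str], str],               # LLM under evaluation: input -> output
    explain: Callable[[str, str], str],        # explanation for (input, output)
    gen_counterfactuals: Callable[[str], List[str]],  # related counterfactual inputs
    simulate: Callable[[str, str, str], str],  # user proxy: (orig input, explanation, cf input) -> predicted output
    agree: Callable[[str, str], bool],         # match judge; semantic similarity for generation tasks
    original_input: str,
) -> float:
    """Fraction of counterfactuals on which the explanation lets the
    simulator correctly predict the model's output (higher = more simulatable)."""
    original_output = model(original_input)
    explanation = explain(original_input, original_output)

    counterfactuals = gen_counterfactuals(original_input)
    if not counterfactuals:
        return 0.0

    hits = 0
    for cf_input in counterfactuals:
        predicted = simulate(original_input, explanation, cf_input)
        actual = model(cf_input)
        # Exact string equality works for yes/no QA; generation tasks need a
        # softer agreement check (e.g., an LLM or similarity-based judge).
        if agree(predicted, actual):
            hits += 1
    return hits / len(counterfactuals)
```

For yes/no question answering, `agree` can simply be exact-answer matching; for summarization or medical suggestion, it would need to judge whether two free-form outputs are consistent, which is the kind of extension a generation-task framework has to specify.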
Similar Papers
Do LLM Self-Explanations Help Users Predict Model Behavior? Evaluating Counterfactual Simulatability with Pragmatic Perturbations
Computation and Language
Explanations help people guess how computers think.
Can LLMs Explain Themselves Counterfactually?
Computation and Language
Helps computers explain their thinking better.
Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
Computation and Language
Makes AI explain its decisions with small changes.