Score: 1

Counterfactual Simulatability of LLM Explanations for Generation Tasks

Published: May 27, 2025 | arXiv ID: 2505.21740v2

By: Marvin Limpijankit, Yanda Chen, Melanie Subbiah, and more

Potential Business Impact:

Helps evaluate whether an AI's explanations of its answers actually let users predict its behavior.

Business Areas:
Simulation Software

LLMs can be unpredictable, as even slight alterations to the prompt can cause the output to change in unexpected ways. Thus, the ability of models to accurately explain their behavior is critical, especially in high-stakes settings. One approach to evaluating explanations is counterfactual simulatability: how well an explanation allows users to infer the model's output on related counterfactual inputs. Counterfactual simulatability has previously been studied for yes/no question-answering tasks. We provide a general framework for extending this method to generation tasks, using news summarization and medical suggestion as example use cases. We find that while LLM explanations do enable users to better predict LLM outputs on counterfactuals in the summarization setting, there is significant room for improvement for medical suggestion. Furthermore, our results suggest that counterfactual simulatability evaluation may be more appropriate for skill-based tasks than for knowledge-based tasks.
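To make the evaluation loop concrete, here is a minimal sketch of how a counterfactual-simulatability score could be computed for a generation task. The helpers `model`, `simulator`, `make_counterfactuals`, and `similarity` are hypothetical stand-ins (e.g., `similarity` could be an entailment or ROUGE-style scorer); this is an illustration of the general recipe, not the authors' implementation.

```python
# Sketch: counterfactual simulatability for generation tasks.
# For each input, the model produces an output and an explanation;
# a simulator (human or LLM proxy) then predicts the model's output
# on related counterfactual inputs using only that explanation, and
# we measure agreement with the model's actual outputs.

from statistics import mean

def simulatability_score(inputs, model, simulator,
                         make_counterfactuals, similarity):
    """Average agreement between simulated and actual outputs
    on counterfactual inputs (higher = more simulatable)."""
    scores = []
    for x in inputs:
        y = model.generate(x)                 # original model output
        e = model.explain(x, y)               # model's explanation of y
        for x_cf in make_counterfactuals(x, e):    # perturbed, related inputs
            y_true = model.generate(x_cf)          # model's actual output on x_cf
            y_pred = simulator.predict(x, y, e, x_cf)  # reader's inference from e
            scores.append(similarity(y_pred, y_true)) # agreement in [0, 1]
    return mean(scores)
```

A natural design choice here is the `similarity` function: yes/no tasks can use exact match, but generation tasks need a graded comparison, which is precisely the extension the paper's framework addresses.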

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Computation and Language