Score: 1

On the Eligibility of LLMs for Counterfactual Reasoning: A Decompositional Study

Published: May 17, 2025 | arXiv ID: 2505.11839v1

By: Shuai Yang, Qi Yang, Luoxi Tang, and more

Potential Business Impact:

Helps AI systems reason more reliably about "what if" scenarios.

Business Areas:
Simulation Software

Counterfactual reasoning has emerged as a crucial technique for generalizing the reasoning capabilities of large language models (LLMs). By generating and analyzing counterfactual scenarios, researchers can assess the adaptability and reliability of model decision-making. Although prior work has shown that LLMs often struggle with counterfactual reasoning, it remains unclear which factors most significantly impede their performance across different tasks and modalities. In this paper, we propose a decompositional strategy that breaks counterfactual generation down into stages, from causality construction to reasoning over counterfactual interventions. To support this decompositional analysis, we investigate 11 datasets spanning diverse tasks, including natural language understanding, mathematics, programming, and vision-language tasks. Through extensive evaluations, we characterize LLM behavior at each decompositional stage and identify how modality type and intermediate reasoning influence performance. By establishing a structured framework for analyzing counterfactual reasoning, this work contributes to the development of more reliable LLM-based reasoning systems and informs future elicitation strategies.
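To make the decompositional idea concrete, here is a minimal sketch of a two-stage evaluation pipeline: stage one elicits an explicit causal structure from the model, and stage two asks it to reason over a counterfactual intervention conditioned on that structure. This is an illustration of the general approach, not the paper's actual prompts or pipeline; `query_llm` is a hypothetical placeholder for whatever LLM client you use.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; wire up a real chat client here."""
    raise NotImplementedError("replace with your LLM client")


def causality_construction(scenario: str) -> str:
    """Stage 1: ask the model to make the scenario's causal dependencies explicit."""
    return query_llm(
        f"Scenario: {scenario}\n"
        "List the cause-effect relationships in this scenario, one per line."
    )


def counterfactual_reasoning(scenario: str, causal_graph: str, intervention: str) -> str:
    """Stage 2: reason over a counterfactual intervention, conditioned on stage 1."""
    return query_llm(
        f"Scenario: {scenario}\n"
        f"Causal relationships:\n{causal_graph}\n"
        f"Counterfactual intervention: {intervention}\n"
        "Given the intervention, what outcome follows? Answer step by step."
    )


if __name__ == "__main__":
    scenario = "The sprinkler ran overnight, so the grass is wet."
    graph = causality_construction(scenario)  # stage 1: causal structure
    answer = counterfactual_reasoning(        # stage 2: intervention reasoning
        scenario, graph, "The sprinkler was turned off before nightfall."
    )
    print(answer)
```

Separating the stages this way lets an evaluator attribute failures to either faulty causal construction or faulty reasoning over the intervention, which is the kind of per-stage characterization the abstract describes.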

Country of Origin
🇺🇸 🇨🇳 China, United States

Page Count
30 pages

Category
Computer Science:
Artificial Intelligence