LLMs Struggle to Perform Counterfactual Reasoning with Parametric Knowledge

Published: June 15, 2025 | arXiv ID: 2506.15732v1

By: Khurram Yamin, Gaurav Ghosal, Bryan Wilder

Potential Business Impact:

AI models struggle to combine facts memorized during training with new or changed facts supplied at the moment of use.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models have been shown to encode extensive world knowledge in their parameters, enabling impressive performance on many knowledge-intensive tasks. However, when deployed in novel settings, LLMs often encounter situations where they must integrate parametric knowledge with new or unfamiliar information. In this work, we explore whether LLMs can combine in-context knowledge with their parametric knowledge through the lens of counterfactual reasoning. Through synthetic and real-world experiments on multi-hop reasoning problems, we show that LLMs generally struggle with counterfactual reasoning, often falling back exclusively on their parametric knowledge. Moreover, we show that simple post-hoc finetuning can struggle to instill counterfactual reasoning ability, often degrading the model's stored parametric knowledge in the process. Ultimately, our work reveals important limitations in current LLMs' ability to repurpose parametric knowledge in novel settings.
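To make the evaluation setup concrete, here is a minimal sketch of the kind of counterfactual multi-hop probe the abstract describes: an in-context edit overrides a memorized fact, and the question can only be answered correctly by chaining that edit with parametric knowledge. This is an illustrative assumption of the setup, not the authors' code; the query_model function and all example strings are hypothetical placeholders.

```python
# Minimal sketch (assumed setup, not the paper's code): build a two-hop
# counterfactual probe and classify the model's answer as either following
# the in-context edit or falling back on memorized (parametric) knowledge.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for any LLM completion call."""
    raise NotImplementedError("plug in a real model here")

def build_prompt(counterfactual: str, question: str) -> str:
    # Hop 1 is supplied in context as a counterfactual edit; hop 2 must
    # come from the model's parametric knowledge.
    return f"Suppose that {counterfactual}.\nQuestion: {question}\nAnswer:"

def classify(answer: str, counterfactual_ans: str, parametric_ans: str) -> str:
    a = answer.lower()
    if counterfactual_ans.lower() in a:
        return "counterfactual"   # integrated the in-context edit
    if parametric_ans.lower() in a:
        return "parametric"       # ignored the edit, used stored knowledge
    return "other"

# Illustrative probe: the edit relocates the Eiffel Tower, and the second
# hop asks for the currency of the (now counterfactual) host country.
prompt = build_prompt(
    counterfactual="the Eiffel Tower is located in Japan",
    question="What is the currency of the country where the Eiffel Tower is located?",
)
# With a real model plugged in:
# print(classify(query_model(prompt), counterfactual_ans="yen", parametric_ans="euro"))
```

Under this setup, a model that answers "euro" is reading the second hop off its parametric knowledge of the unedited world, which is the failure mode the paper reports as dominant.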

Page Count
10 pages

Category
Computer Science:
Artificial Intelligence