LLMs Struggle to Perform Counterfactual Reasoning with Parametric Knowledge
By: Khurram Yamin, Gaurav Ghosal, Bryan Wilder
Potential Business Impact:
Computers can't easily mix old and new facts.
Large Language Models have been shown to contain extensive world knowledge in their parameters, enabling impressive performance on many knowledge-intensive tasks. However, when deployed in novel settings, LLMs often encounter situations where they must integrate parametric knowledge with new or unfamiliar information. In this work, we explore whether LLMs can combine in-context knowledge with their parametric knowledge through the lens of counterfactual reasoning. Through synthetic and real-world experiments on multi-hop reasoning problems, we show that LLMs generally struggle with counterfactual reasoning, often resorting to exclusively using their parametric knowledge. Moreover, we show that simple post-hoc finetuning can struggle to instill counterfactual reasoning ability, often degrading stored parametric knowledge instead. Ultimately, our work reveals important limitations in current LLMs' abilities to re-purpose parametric knowledge in novel settings.
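The kind of probe the abstract describes can be sketched in a few lines: inject a counterfactual premise in-context, ask a question whose answer depends on that premise, and check whether the model's answer follows the counterfactual or falls back on its parametric knowledge. This is a minimal illustrative sketch, not the paper's actual code; all function names and the example facts are assumptions.

```python
# Hypothetical sketch of a counterfactual multi-hop probe. The helper names
# and the Eiffel Tower example are illustrative, not from the paper.

def build_probe(entity, relation, counterfactual_value, hop_question):
    """Prepend a counterfactual premise to a question whose answer
    depends on that premise rather than on stored world knowledge."""
    premise = f"Suppose that {entity}'s {relation} is {counterfactual_value}."
    return f"{premise} {hop_question}"

def classify_answer(answer, parametric_answer, counterfactual_answer):
    """Label a model's answer: did it follow the in-context
    counterfactual, revert to parametric knowledge, or neither?"""
    if counterfactual_answer.lower() in answer.lower():
        return "counterfactual"
    if parametric_answer.lower() in answer.lower():
        return "parametric"
    return "other"

prompt = build_probe(
    entity="the Eiffel Tower",
    relation="location",
    counterfactual_value="Rome",
    hop_question="In which country is the Eiffel Tower located?",
)
# Answering the second hop with "France" signals reliance on parametric
# knowledge; "Italy" signals correct use of the counterfactual premise.
print(classify_answer("It is located in Italy.", "France", "Italy"))
```

Aggregating such labels over many prompts would give the kind of failure rate the abstract reports, with "parametric" answers indicating the fallback behavior the authors observe.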
Similar Papers
On the Eligibility of LLMs for Counterfactual Reasoning: A Decompositional Study
Artificial Intelligence
Helps computers think about "what if" better.
Counterfactual reasoning: an analysis of in-context emergence
Computation and Language
Helps computers guess what happens if things change.
The Knowledge-Reasoning Dissociation: Fundamental Limitations of LLMs in Clinical Natural Language Inference
Artificial Intelligence
Computers can't yet use medical knowledge reliably.