Realizing LLMs' Causal Potential Requires Science-Grounded, Novel Benchmarks
By: Ashutosh Srivastava, Lokesh Nagalapatti, Gautam Jajoo, and more
Potential Business Impact:
Helps AI understand cause and effect better.
Recent claims of strong performance by Large Language Models (LLMs) on causal discovery are undermined by a key flaw: many evaluations rely on benchmarks likely included in pretraining corpora. Such apparent success has been taken to suggest that LLM-only methods, which ignore observational data, outperform classical statistical approaches. We challenge this narrative by asking: Do LLMs truly reason about causal structure, and how can we measure it without memorization concerns? Can they be trusted for real-world scientific discovery? We argue that realizing LLMs' potential for causal analysis requires two shifts: (P.1) developing robust evaluation protocols based on recent scientific studies to guard against dataset leakage, and (P.2) designing hybrid methods that combine LLM-derived knowledge with data-driven statistics. To address P.1, we encourage evaluating discovery methods on novel, real-world scientific studies. We outline a practical recipe for extracting causal graphs from recent publications released after an LLM's training cutoff, ensuring relevance and preventing memorization while capturing both established and novel relations. Compared to benchmarks like BNLearn, where LLMs achieve near-perfect accuracy, they perform far worse on our curated graphs, underscoring the need for statistical grounding. Supporting P.2, we show that using LLM predictions as priors for the classical PC algorithm significantly improves accuracy over both LLM-only and purely statistical methods. We call on the community to adopt science-grounded, leakage-resistant benchmarks and invest in hybrid causal discovery methods suited to real-world inquiry.
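The hybrid direction in P.2 can be sketched concretely. Below is a minimal, illustrative example of a PC-style skeleton search in which LLM-elicited priors forbid certain edges before data-driven conditional-independence pruning. This is our own simplified sketch, not the paper's implementation: the function names, the Fisher-z test, and the toy chain data are all assumptions made for illustration.

```python
import math
from itertools import combinations

import numpy as np

def fisher_z_pvalue(data, i, j, S):
    """p-value for X_i independent of X_j given X_S under a Gaussian model (Fisher-z test)."""
    idx = [i, j] + list(S)
    # Partial correlation of (i, j) given S via the inverse correlation (precision) matrix.
    P = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -P[0, 1] / math.sqrt(P[0, 0] * P[1, 1])
    r = max(min(r, 0.9999), -0.9999)  # guard the log transform
    z = 0.5 * math.log((1 + r) / (1 - r))
    stat = math.sqrt(data.shape[0] - len(S) - 3) * abs(z)
    return 2 * (1 - 0.5 * (1 + math.erf(stat / math.sqrt(2))))

def hybrid_skeleton(data, forbidden=(), alpha=0.05, max_cond=1):
    """PC-style skeleton search seeded with LLM-derived priors.

    `forbidden` holds variable-index pairs the LLM judged non-adjacent;
    those edges are removed up front, and the remaining edges are pruned
    by conditional-independence tests of increasing conditioning-set size.
    """
    d = data.shape[1]
    adj = {frozenset(e) for e in combinations(range(d), 2)}
    adj -= {frozenset(e) for e in forbidden}  # apply the LLM prior
    for level in range(max_cond + 1):
        for edge in list(adj):
            i, j = tuple(edge)
            others = [k for k in range(d) if k not in (i, j)]
            for S in combinations(others, level):
                if fisher_z_pvalue(data, i, j, S) > alpha:
                    adj.discard(edge)  # data says conditionally independent
                    break
    return adj
```

For example, on data simulated from a chain X0 → X1 → X2, passing `forbidden={(0, 2)}` (a hypothetical LLM judgment that X0 and X2 are not directly linked) lets the statistical phase concentrate on the remaining candidate edges; the paper's actual integration of priors into PC may differ.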
Similar Papers
CARE: Turning LLMs Into Causal Reasoning Expert
Machine Learning (CS)
Teaches computers to understand cause and effect.
Benchmarking LLM Causal Reasoning with Scientifically Validated Relationships
Computation and Language
Teaches computers to understand why things happen.
Can LLMs Leverage Observational Data? Towards Data-Driven Causal Discovery with LLMs
Machine Learning (CS)
Helps computers find causes from data.