Benchmarking LLM Causal Reasoning with Scientifically Validated Relationships
By: Donggyu Lee, Sungwon Park, Yerin Hwang, and more
Potential Business Impact:
Teaches computers to understand why things happen.
Causal reasoning is fundamental for Large Language Models (LLMs) to understand genuine cause-and-effect relationships beyond pattern matching. Existing benchmarks suffer from critical limitations such as reliance on synthetic data and narrow domain coverage. We introduce a novel benchmark constructed from causally identified relationships extracted from top-tier economics and finance journals, drawing on rigorous methodologies including instrumental variables, difference-in-differences, and regression discontinuity designs. Our benchmark comprises 40,379 evaluation items covering five task types across domains such as health, environment, technology, law, and culture. Experimental results on eight state-of-the-art LLMs reveal substantial limitations, with the best model achieving only 57.6% accuracy. Moreover, model scale does not consistently translate to superior performance, and even advanced reasoning models struggle with fundamental causal relationship identification. These findings underscore a critical gap between current LLM capabilities and the demands of reliable causal reasoning in high-stakes applications.
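To make the identification strategies named in the abstract concrete, here is a minimal Python sketch of a difference-in-differences (DiD) estimate, one of the methodologies the benchmark's source papers rely on. The data and variable names are hypothetical illustrations, not from the paper or its benchmark.

```python
import numpy as np

# Hypothetical panel: outcomes for treated and control groups, before and
# after a policy change. DiD compares the over-time change in the treated
# group to the change in the control group, differencing out shared trends.
treated_pre  = np.array([10.1, 9.8, 10.3, 10.0])   # treated units, pre-policy
treated_post = np.array([12.4, 12.0, 12.7, 12.2])  # treated units, post-policy
control_pre  = np.array([9.9, 10.2, 10.0, 10.1])   # control units, pre-policy
control_post = np.array([10.6, 10.9, 10.7, 10.8])  # control units, post-policy

# DiD estimator: (change in treated) - (change in control)
did = (treated_post.mean() - treated_pre.mean()) - (
       control_post.mean() - control_pre.mean())
print(f"DiD estimate of the causal effect: {did:.2f}")
```

Under the parallel-trends assumption, the control group's change stands in for what the treated group would have done absent treatment, so the remaining difference is attributable to the intervention; relationships identified this way are the kind the benchmark asks LLMs to reason about.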
Similar Papers
Realizing LLMs' Causal Potential Requires Science-Grounded, Novel Benchmarks
Machine Learning (CS)
Helps AI understand cause and effect better.
A Survey on Enhancing Causal Reasoning Ability of Large Language Models
Computation and Language
Teaches computers to understand cause and effect.
CausalVLBench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models
Machine Learning (CS)
Helps computers understand cause and effect in pictures.