Causal Reasoning in Pieces: Modular In-Context Learning for Causal Discovery
By: Kacper Kadziolka, Saber Salehkaleybar
Potential Business Impact:
Helps computers understand cause and effect better.
Causal inference remains a fundamental challenge for large language models. Recent advances in internal reasoning with large language models have sparked interest in whether state-of-the-art reasoning models can robustly perform causal discovery, a task where conventional models often suffer from severe overfitting and near-random performance under data perturbations. We study causal discovery on the Corr2Cause benchmark using the emergent OpenAI o-series and DeepSeek-R model families and find that these reasoning-first architectures achieve significantly greater native gains than prior approaches. To capitalize on these strengths, we introduce a modular in-context pipeline inspired by the Tree-of-Thoughts and Chain-of-Thought methodologies, yielding nearly three-fold improvements over conventional baselines. We further probe the pipeline's impact by analyzing reasoning chain length and complexity, and by conducting qualitative and quantitative comparisons between conventional and reasoning models. Our findings suggest that while advanced reasoning models represent a substantial leap forward, carefully structured in-context frameworks are essential to maximize their capabilities and offer a generalizable blueprint for causal discovery across diverse domains.
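The abstract does not spell out the pipeline's module boundaries or prompts, but a modular in-context pipeline of this kind can be illustrated with a minimal sketch. The sketch below assumes a hypothetical extract-propose-verify-select decomposition (Chain-of-Thought-style sequential steps with Tree-of-Thoughts-style branching over candidate graphs); the prompt templates, `stub_llm`, and `modular_causal_discovery` are illustrative names, not the authors' implementation.

```python
from typing import Callable, List

# Hypothetical prompt templates; the paper's actual prompts are not
# reproduced in the abstract above.
EXTRACT_PROMPT = "List the variables and the correlation statements in:\n{problem}"
PROPOSE_PROMPT = (
    "Given these variables and correlations:\n{facts}\n"
    "Propose one plausible causal graph consistent with them."
)
EVALUATE_PROMPT = (
    "Problem:\n{problem}\nCandidate graph:\n{graph}\n"
    "Does this graph entail the stated correlations? Answer Yes or No, then explain."
)

def modular_causal_discovery(
    problem: str,
    llm: Callable[[str], str],
    n_branches: int = 3,
) -> str:
    """Sequential decomposition with branching: extract facts,
    branch candidate graphs, verify each, then select."""
    # Step 1 (chain): reduce the raw problem to structured facts.
    facts = llm(EXTRACT_PROMPT.format(problem=problem))

    # Step 2 (tree): sample several candidate causal graphs.
    candidates: List[str] = [
        llm(PROPOSE_PROMPT.format(facts=facts)) for _ in range(n_branches)
    ]

    # Step 3: score each branch with a separate verification call.
    scored = []
    for graph in candidates:
        verdict = llm(EVALUATE_PROMPT.format(problem=problem, graph=graph))
        scored.append((verdict.strip().lower().startswith("yes"), graph))

    # Step 4: prefer a candidate the verifier accepted; else fall back.
    for accepted, graph in scored:
        if accepted:
            return graph
    return scored[0][1]

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; swap in a real model call.
    def stub_llm(prompt: str) -> str:
        return "Yes: A -> B (stub response)"

    print(modular_causal_discovery("Suppose A correlates with B ...", stub_llm))
```

The key design point this sketch captures is that each causal-discovery subtask gets its own focused context rather than one monolithic prompt, which is the general mechanism behind the reported improvements over conventional baselines.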
Similar Papers
Do Large Language Models Reason Causally Like Us? Even Better?
Artificial Intelligence
Computers now reason like humans, sometimes better.
CRAwDAD: Causal Reasoning Augmentation with Dual-Agent Debate
Machine Learning (CS)
Computers argue to find the best cause and effect.
Assessing LLM Reasoning Through Implicit Causal Chain Discovery in Climate Discourse
Artificial Intelligence
Computers learn to explain how things happen.