CRAwDAD: Causal Reasoning Augmentation with Dual-Agent Debate
By: Finn G. Vamosi, Nils D. Forkert
Potential Business Impact:
Computers argue to find the best cause and effect.
When people reason about cause and effect, they often consider many competing "what if" scenarios before deciding which explanation fits best. Analogously, advanced language models capable of causal inference can consider multiple interventions and counterfactuals to judge the validity of causal claims. Crucially, this type of reasoning is less like a single calculation and more like an internal dialogue between alternative hypotheses. In this paper, we make this dialogue explicit through a dual-agent debate framework in which one model provides a structured causal inference and the other critically examines that reasoning for logical flaws. When disagreements arise, the agents attempt to persuade each other, challenging each other's logic and revising their conclusions until they converge on a mutually agreed-upon answer. To take advantage of this deliberative process, we specifically use reasoning language models, whose strengths in both causal inference and adversarial debate remain under-explored relative to standard large language models. We evaluate our approach on the CLadder dataset, a benchmark linking natural-language questions to formally defined causal graphs across all three rungs of Pearl's ladder of causation (association, intervention, and counterfactuals). With Qwen3 and DeepSeek-R1 as debater agents, we demonstrate that multi-agent debate improves DeepSeek-R1's overall causal-inference accuracy from 78.03% to 87.45%, with the counterfactual category in particular improving from 67.94% to 80.04%. Similarly, Qwen3's overall accuracy improves from 84.16% to 89.41%, and its counterfactual accuracy from 71.53% to 80.35%, showing that even strong models can benefit substantially from debate with weaker agents. Our results highlight the potential of reasoning models as building blocks for multi-agent systems for causal inference, and demonstrate the importance of diverse perspectives in causal problem-solving.
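To make the protocol concrete, here is a minimal Python sketch of the kind of dual-agent debate loop the abstract describes: one agent proposes a structured causal inference, the other critiques it, and the two exchange arguments until their final answers agree or a round limit is reached. The query_model helper, the prompt wording, and the "ANSWER:" line convention are illustrative assumptions rather than the paper's actual implementation; query_model would need to be wired to a real LLM backend.

# Hypothetical stand-in for an LLM call; in CRAwDAD the debaters are
# reasoning models such as Qwen3 and DeepSeek-R1.
def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError("connect this to your LLM backend")

def extract_answer(response: str) -> str:
    # Assumes each agent ends its output with a line "ANSWER: yes" or
    # "ANSWER: no" (CLadder questions are binary yes/no).
    for line in reversed(response.strip().splitlines()):
        if line.upper().startswith("ANSWER:"):
            return line.split(":", 1)[1].strip().lower()
    return ""

def dual_agent_debate(question: str, reasoner: str, critic: str,
                      max_rounds: int = 3) -> str:
    # Round 0: the reasoner gives a structured causal inference;
    # the critic audits that reasoning and states its own conclusion.
    proposal = query_model(reasoner,
        "Answer this causal question step by step, then end with "
        "'ANSWER: yes' or 'ANSWER: no'.\n" + question)
    critique = query_model(critic,
        "Check the following causal reasoning for logical flaws, then give "
        "your own conclusion ending with 'ANSWER: yes' or 'ANSWER: no'.\n"
        f"Question: {question}\nReasoning: {proposal}")
    for _ in range(max_rounds):
        if extract_answer(proposal) == extract_answer(critique) != "":
            break  # consensus reached
        # Disagreement: each agent sees the other's argument and may
        # defend its position or revise its conclusion.
        proposal = query_model(reasoner,
            f"Question: {question}\nYour argument: {proposal}\n"
            f"Opponent's argument: {critique}\n"
            "Defend or revise; end with 'ANSWER: yes' or 'ANSWER: no'.")
        critique = query_model(critic,
            f"Question: {question}\nYour argument: {critique}\n"
            f"Opponent's argument: {proposal}\n"
            "Defend or revise; end with 'ANSWER: yes' or 'ANSWER: no'.")
    # Fall back to the reasoner's answer if no consensus emerges.
    return extract_answer(proposal) or extract_answer(critique)

Capping the number of rounds keeps the loop from debating indefinitely when the agents never converge; defaulting to the reasoner's answer in that case is likewise an illustrative choice, not a detail taken from the paper.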
Similar Papers
Can LLM Agents Really Debate? A Controlled Study of Multi-Agent Debate in Logical Reasoning
Multiagent Systems
Makes AI teams solve puzzles better by arguing.
Causal Reasoning in Pieces: Modular In-Context Learning for Causal Discovery
Artificial Intelligence
Helps computers understand cause and effect better.
Do Large Language Models Reason Causally Like Us? Even Better?
Artificial Intelligence
Computers now reason like humans, sometimes better.