Score: 3

Causal Reasoning Favors Encoders: On The Limits of Decoder-Only Models

Published: December 11, 2025 | arXiv ID: 2512.10561v1

By: Amartya Roy, Elamparithy M, Kripabandhu Ghosh, and more

BigTech Affiliations: Microsoft

Potential Business Impact:

Helps language models reason about cause and effect more reliably, even when the input is not natural language.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In-context learning (ICL) underpins recent advances in large language models (LLMs), yet its role and performance in causal reasoning remain unclear. Causal reasoning demands multi-hop composition and strict conjunctive control, and reliance on spurious lexical relations in the input can yield misleading results. We hypothesize that, because they project the input into a latent space, encoder and encoder-decoder architectures are better suited to such multi-hop conjunctive reasoning than decoder-only models. To test this hypothesis, we compare fine-tuned versions of all three architectures against zero- and few-shot ICL in both natural-language and non-natural-language settings. We find that ICL alone is insufficient for reliable causal reasoning and often over-attends to irrelevant input features. In particular, decoder-only models are noticeably brittle under distributional shift, while fine-tuned encoder and encoder-decoder models generalize more robustly across our tests, including the non-natural-language split; decoder-only architectures match or surpass them only at large scales. We conclude that, for cost-effective, short-horizon, robust causal reasoning, encoder or encoder-decoder architectures with targeted fine-tuning are preferable.
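To make the comparison concrete, below is a minimal sketch of the two regimes the abstract contrasts: a fine-tuned encoder scoring a causal-entailment pair versus a decoder-only model answering via zero-shot ICL. It assumes a binary "does the premise causally entail the hypothesis?" formulation; the checkpoints (`bert-base-uncased`, `gpt2`) and the toy two-hop example are illustrative placeholders, not the paper's actual benchmark or models.

```python
# Sketch of the two evaluation regimes, under the assumptions above.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

premise = "Rain wets the ground. Wet ground makes the path slippery."
hypothesis = "Rain makes the path slippery."  # requires a 2-hop composition

# --- Regime 1: encoder with a classification head. The head here is
# freshly initialized; in the fine-tuned setting it would first be
# trained on causal-entailment pairs before scoring. ---
enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
inputs = enc_tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = encoder(**inputs).logits
print("encoder P(entails) =", logits.softmax(-1)[0, 1].item())

# --- Regime 2: zero-shot ICL with a decoder-only model, which must
# answer from the prompt alone, with no task-specific fine-tuning. ---
dec_tok = AutoTokenizer.from_pretrained("gpt2")
decoder = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = (
    f"Premise: {premise}\n"
    f"Question: Does it follow that '{hypothesis}'? Answer Yes or No.\n"
    "Answer:"
)
ids = dec_tok(prompt, return_tensors="pt")
out = decoder.generate(**ids, max_new_tokens=3, do_sample=False)
# Decode only the newly generated tokens, past the prompt length.
print("decoder says:", dec_tok.decode(out[0][ids["input_ids"].shape[1]:]))
```

The design difference the sketch highlights: the encoder route commits to a latent representation of the full input before classifying, while the decoder-only route conditions token-by-token on the surface form of the prompt, which is where the abstract locates its brittleness to spurious lexical cues.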

Country of Origin
🇺🇸 🇮🇳 United States, India

Repos / Data Links

Page Count
30 pages

Category
Computer Science:
Computation and Language