Score: 1

Do explanations generalize across large reasoning models?

Published: January 16, 2026 | arXiv ID: 2601.11517v1

By: Koyena Pal, David Bau, Chandan Singh

BigTech Affiliations: Microsoft

Potential Business Impact:

Tests whether one AI model's written-out reasoning can be understood and reused by other AI models, helping gauge when AI explanations reflect real insight rather than model-specific quirks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large reasoning models (LRMs) produce a textual chain of thought (CoT) while solving a problem, which serves as a potentially powerful tool for understanding the problem by surfacing a human-readable, natural-language explanation. However, it is unclear whether these explanations generalize, i.e., whether they capture general patterns about the underlying problem rather than patterns that are idiosyncratic to the LRM. This question is crucial when using LRMs to understand or discover new concepts, e.g., in AI for science. We study this question by evaluating a specific notion of generalizability: whether explanations produced by one LRM induce the same behavior when given to other LRMs. We find that CoT explanations often exhibit this form of generalization (i.e., they increase consistency between LRMs) and that this increased generalization is correlated with human preference rankings and with post-training via reinforcement learning. We further analyze the conditions under which explanations yield consistent answers and propose a straightforward, sentence-level ensembling strategy that improves consistency. Taken together, these results prescribe caution when using LRM explanations to yield new insights and outline a framework for characterizing the generalization of LRM explanations.
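The core evaluation the abstract describes, giving one model's chain of thought to a second model and checking whether the second model reproduces the first model's answer, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the `target_model` callable, the prompt template, and the exact-match comparison are all placeholders chosen here for clarity.

```python
from typing import Callable, Sequence

def answer_consistency(
    problems: Sequence[str],
    source_answers: Sequence[str],
    source_cots: Sequence[str],
    target_model: Callable[[str], str],
) -> float:
    """Fraction of problems where the target model, shown the source model's
    chain of thought, arrives at the same final answer as the source model.

    NOTE: `target_model` is an assumed interface (prompt string -> answer
    string); swap in whatever inference client you actually use.
    """
    if not problems:
        return 0.0
    matches = 0
    for problem, answer, cot in zip(problems, source_answers, source_cots):
        # Hypothetical prompt format: present the problem plus the source
        # model's explanation, then ask for a final answer.
        prompt = (
            f"{problem}\n\n"
            f"Here is an explanation of how to solve this problem:\n{cot}\n\n"
            "Give your final answer."
        )
        target_answer = target_model(prompt)
        matches += int(target_answer.strip() == answer.strip())
    return matches / len(problems)
```

Comparing this score against a no-explanation baseline (the same loop with the CoT omitted from the prompt) gives a simple measure of how much the explanation increases cross-model consistency, which is the quantity the abstract frames as generalization.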

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
21 pages

Category
Computer Science:
Computation and Language