Test-Time Scaling in Reasoning Models Is Not Effective for Knowledge-Intensive Tasks Yet
By: James Xu Zhao, Bryan Hooi, See-Kiong Ng
Potential Business Impact:
Shows that making AI think longer does not reliably stop it from making up facts.
Test-time scaling increases inference-time computation by allowing models to generate long reasoning chains, and has shown strong performance across many domains. However, in this work, we show that this approach is not yet effective for knowledge-intensive tasks, where high factual accuracy and low hallucination rates are essential. We conduct a comprehensive evaluation of test-time scaling using 12 reasoning models on two knowledge-intensive benchmarks. Our results reveal that increasing test-time computation does not consistently improve accuracy and, in many cases, even leads to more hallucinations. We then analyze how extended reasoning affects hallucination behavior, and find that reduced hallucinations often result from the model choosing to abstain after thinking more, rather than from improved factual recall. Conversely, for some models, longer reasoning encourages attempts on previously unanswered questions, many of which result in hallucinations. Case studies show that extended reasoning can induce confirmation bias, leading to overconfident hallucinations. Despite these limitations, enabling thinking remains beneficial compared to disabling it entirely. Code and data are available at https://github.com/XuZhao0/tts-knowledge.
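As a rough illustration of the evaluation the abstract describes, the sketch below varies a per-question thinking budget and buckets each response as correct, abstained, or hallucinated. This is a minimal sketch, not the paper's harness (which is linked above): `query_model`, `classify`, and the budget values are all illustrative assumptions.

```python
# Minimal sketch: vary the test-time "thinking" budget and track how often a
# model answers correctly, abstains, or hallucinates (attempts and is wrong).
from collections import Counter

def query_model(question: str, thinking_budget: int) -> str:
    """Hypothetical stand-in for a reasoning-model call that caps reasoning
    tokens at `thinking_budget`; replace with a real API client."""
    return "I don't know"  # placeholder response

def classify(answer: str, gold: str) -> str:
    """Bucket a response: correct, abstained, or hallucinated.
    Assumes substring matching for correctness and simple refusal phrases
    for abstention; real evaluation would need a more careful judge."""
    if gold.lower() in answer.lower():
        return "correct"
    if "don't know" in answer.lower() or "cannot" in answer.lower():
        return "abstained"
    return "hallucinated"

def evaluate(dataset, budgets=(512, 2048, 8192)):
    """For each thinking budget, report the fraction of each outcome."""
    for budget in budgets:
        counts = Counter(
            classify(query_model(q, budget), gold) for q, gold in dataset
        )
        total = sum(counts.values())
        rates = {k: counts[k] / total
                 for k in ("correct", "hallucinated", "abstained")}
        print(f"budget={budget}: {rates}")

if __name__ == "__main__":
    evaluate([("In what year was the Eiffel Tower completed?", "1889")])
```

Under this framing, the paper's finding is that the "hallucinated" rate does not reliably fall as the budget grows, and apparent improvements often come from the "abstained" bucket growing instead.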
Similar Papers
Test-Time Scaling of Reasoning Models for Machine Translation
Computation and Language
Makes computer translators better at fixing their own mistakes.
Does Thinking More always Help? Understanding Test-Time Scaling in Reasoning Models
Artificial Intelligence
Making AI think longer can make its answers worse, not better.
Limits and Gains of Test-Time Scaling in Vision-Language Reasoning
Machine Learning (CS)
Makes AI better at understanding pictures and words.