Understanding LLM Reasoning for Abstractive Summarization
By: Haohan Yuan, Siu Cheung Hui, Haopeng Zhang
Potential Business Impact:
Helps computers summarize stories more truthfully.
While the reasoning capabilities of Large Language Models (LLMs) are well established in analytical tasks such as mathematics and code generation, their utility for abstractive summarization remains widely assumed but largely unverified. To bridge this gap, we first tailor general reasoning strategies to the summarization domain. We then conduct a systematic, large-scale comparative study of 8 reasoning strategies and 3 Large Reasoning Models (LRMs) across 8 diverse datasets, assessing both summary quality and faithfulness. Our findings show that reasoning is not a universal solution and that its effectiveness depends heavily on the specific strategy and context. Specifically, we observe a trade-off between summary quality and factual faithfulness: explicit reasoning strategies tend to improve fluency at the expense of factual grounding, while implicit reasoning in LRMs exhibits the inverse pattern. Furthermore, increasing an LRM's internal reasoning budget does not improve, and can even hurt, factual consistency, suggesting that effective summarization demands faithful compression rather than creative over-thinking.
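As a rough illustration of the kind of comparison the study describes, the sketch below contrasts a direct summarization prompt with an explicit reasoning prompt and scores each resulting summary for faithfulness against the source. It is not the authors' pipeline: `call_llm`, `faithfulness_score`, and the prompt templates are hypothetical stand-ins for a real model API, a real factual-consistency metric, and the paper's tailored strategies.

```python
# Minimal sketch of comparing a direct vs. an explicit-reasoning summarization
# strategy on the same document. All names below are illustrative placeholders,
# not the paper's implementation.

from typing import Callable, Dict

DIRECT_PROMPT = "Summarize the following article in 3 sentences:\n\n{article}"

REASONING_PROMPT = (
    "Read the article, list its key facts step by step, "
    "then write a 3-sentence summary grounded only in those facts:\n\n{article}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    raise NotImplementedError("Plug in an actual model call here.")


def faithfulness_score(article: str, summary: str) -> float:
    """Hypothetical stand-in for an entailment- or QA-based consistency metric."""
    raise NotImplementedError("Plug in a real faithfulness metric here.")


def compare_strategies(
    article: str,
    llm: Callable[[str], str] = call_llm,
    scorer: Callable[[str, str], float] = faithfulness_score,
) -> Dict[str, float]:
    """Summarize one article with both strategies and return a faithfulness score per strategy."""
    results: Dict[str, float] = {}
    for name, template in [("direct", DIRECT_PROMPT), ("reasoning", REASONING_PROMPT)]:
        summary = llm(template.format(article=article))
        results[name] = scorer(article, summary)
    return results
```

In the paper's terms, aggregating such per-strategy scores over many documents is what surfaces the quality-versus-faithfulness trade-off; the sketch only shows the per-document comparison step.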
Similar Papers
AbstRaL: Augmenting LLMs' Reasoning by Reinforcing Abstract Thinking
Computation and Language
Teaches computers to think smarter, not just memorize.
Human-Level Reasoning: A Comparative Study of Large Language Models on Logical and Abstract Reasoning
Artificial Intelligence
Tests if AI can think like a person.