Score: 1

Not too long to read: Evaluating LLM-generated extreme scientific summaries

Published: December 29, 2025 | arXiv ID: 2512.23206v1

By: Zhuoqi Lyu, Qing Ke

Potential Business Impact:

Helps computers write short summaries of science papers.

Business Areas:
Text Analytics, Data and Analytics, Software

High-quality extreme scientific summaries (TLDRs) facilitate effective science communication. How well do large language models (LLMs) generate them, and how do LLM-generated summaries differ from those written by human experts? The lack of a comprehensive, high-quality scientific TLDR dataset hinders both the development and evaluation of LLMs' summarization ability. To address this, we propose a novel dataset, BiomedTLDR, containing a large sample of researcher-authored summaries of scientific papers, built by leveraging the common practice of including authors' comments alongside bibliography items. We then test popular open-weight LLMs on generating TLDRs from abstracts. Our analysis reveals that, although some of them produce human-like summaries, LLMs generally show a stronger affinity for the original text's lexical choices and rhetorical structures, and hence tend to be more extractive than abstractive compared with human authors. Our code and datasets are available at https://github.com/netknowledge/LLM_summarization (Lyu and Ke, 2025).
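One way to quantify the extractiveness the abstract describes is the proportion of summary n-grams copied verbatim from the source text. The sketch below is a minimal illustration of that idea, not the paper's actual evaluation pipeline; the function names and example strings are hypothetical.

```python
# Minimal sketch: novel n-gram ratio as an extractiveness proxy.
# A lower ratio means the summary reuses more of the source's wording
# (more extractive); a higher ratio means more abstractive rephrasing.
# Illustrative only -- not the authors' evaluation code.

def ngrams(tokens, n):
    """Return the set of n-grams (as tuples) in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(source: str, summary: str, n: int = 2) -> float:
    """Fraction of summary n-grams that do NOT appear in the source."""
    src = ngrams(source.lower().split(), n)
    summ = ngrams(summary.lower().split(), n)
    if not summ:
        return 0.0
    return len(summ - src) / len(summ)

if __name__ == "__main__":
    abstract = ("large language models generate extreme summaries "
                "of scientific papers from their abstracts")
    # Hypothetical examples: one copies source phrasing, one rephrases.
    extractive = "large language models generate extreme summaries"
    abstractive = "LLMs can condense a paper into one short sentence"
    print(novel_ngram_ratio(abstract, extractive))   # low  -> extractive
    print(novel_ngram_ratio(abstract, abstractive))  # high -> abstractive
```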

Country of Origin
🇭🇰 Hong Kong

Repos / Data Links
https://github.com/netknowledge/LLM_summarization

Page Count
20 pages

Category
Computer Science:
Computation and Language