Not too long to read: Evaluating LLM-generated extreme scientific summaries
By: Zhuoqi Lyu, Qing Ke
Potential Business Impact:
Helps computers write short summaries of science papers.
High-quality scientific extreme summaries (TLDRs) facilitate effective science communication. How well do large language models (LLMs) generate them, and how do LLM-generated summaries differ from those written by human experts? The lack of a comprehensive, high-quality scientific TLDR dataset hinders both the development and evaluation of LLMs' summarization ability. To address this, we propose a novel dataset, BiomedTLDR, containing a large sample of researcher-authored summaries of scientific papers, which leverages the common practice of including authors' comments alongside bibliography items. We then test popular open-weight LLMs on generating TLDRs from abstracts. Our analysis reveals that, although some of them successfully produce human-like summaries, LLMs generally exhibit a greater affinity for the original text's lexical choices and rhetorical structures than humans do, and hence tend to be more extractive than abstractive. Our code and datasets are available at https://github.com/netknowledge/LLM_summarization (Lyu and Ke, 2025).
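The extractive-versus-abstractive distinction above can be made concrete with a simple proxy: the fraction of a summary's word bigrams that appear verbatim in the source text. This is an illustrative sketch, not the paper's actual metric; the function name and toy strings are assumptions for demonstration.

```python
# Illustrative proxy for extractiveness (NOT the paper's exact measure):
# the share of summary word bigrams that also occur verbatim in the
# source. Higher values suggest heavier reuse of the source's wording.

def extractiveness(source: str, summary: str, n: int = 2) -> float:
    src = source.lower().split()
    summ = summary.lower().split()
    summ_ngrams = [tuple(summ[i:i + n]) for i in range(len(summ) - n + 1)]
    if not summ_ngrams:
        return 0.0
    src_ngrams = {tuple(src[i:i + n]) for i in range(len(src) - n + 1)}
    overlap = sum(1 for g in summ_ngrams if g in src_ngrams)
    return overlap / len(summ_ngrams)

# Toy example: a copied summary scores higher than a paraphrase.
abstract = "large language models generate extreme summaries of scientific papers"
copied = "large language models generate extreme summaries"
rewritten = "LLMs can compress papers into one line"
print(extractiveness(abstract, copied))     # all bigrams found in source
print(extractiveness(abstract, rewritten))  # no bigrams found in source
```

A summary whose bigrams all come from the source scores 1.0 (fully extractive); a fully reworded one scores near 0.0. Metrics used in practice, such as extractive fragment coverage and density, refine this idea by measuring the length of copied spans as well.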
Similar Papers
Understanding LLM Reasoning for Abstractive Summarization
Computation and Language
Helps computers summarize stories more truthfully.
Generalization Bias in Large Language Model Summarization of Scientific Research
Computation and Language
AI chatbots often twist science facts too much.
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
Computation and Language
Computers can't yet judge science papers well.