SurveyGen: Quality-Aware Scientific Survey Generation with Large Language Models
By: Tong Bao, Mir Tafseer Nayeem, Davood Rafiei, and more
Potential Business Impact:
Helps computers write better science summaries.
Automatic survey generation has emerged as a key task in scientific document processing. While large language models (LLMs) have shown promise in generating survey texts, the lack of standardized evaluation datasets critically hampers rigorous assessment of their performance against human-written surveys. In this work, we present SurveyGen, a large-scale dataset comprising over 4,200 human-written surveys across diverse scientific domains, along with 242,143 cited references and extensive quality-related metadata for both the surveys and the cited papers. Leveraging this resource, we build QUAL-SG, a novel quality-aware framework for survey generation that enhances the standard Retrieval-Augmented Generation (RAG) pipeline by incorporating quality-aware indicators into literature retrieval to assess and select higher-quality source papers. Using this dataset and framework, we systematically evaluate state-of-the-art LLMs under varying levels of human involvement, from fully automatic generation to human-guided writing. Experimental results and human evaluations show that while semi-automatic pipelines can achieve partially competitive outcomes, fully automatic survey generation still suffers from low citation quality and limited critical analysis.
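The abstract describes folding quality indicators into the RAG retrieval step so that higher-quality source papers are preferred. The sketch below illustrates one plausible way to do this, blending a relevance score with a simple quality score before ranking candidates; the indicator names (`citation_count`, `venue_rank`), the normalization, and the weights are illustrative assumptions, not the paper's actual QUAL-SG design.

```python
# Hedged sketch of quality-aware re-ranking for a survey-generation RAG
# pipeline. Indicator names and weights are assumptions for illustration.

def quality_score(paper):
    """Combine simple quality indicators into a score in [0, 1]."""
    # Cap citation influence so a single highly cited paper doesn't dominate.
    citations = min(paper.get("citation_count", 0) / 100.0, 1.0)
    venue = paper.get("venue_rank", 0.0)  # assumed pre-normalized to [0, 1]
    return 0.5 * citations + 0.5 * venue

def rerank(candidates, alpha=0.7):
    """Blend retrieval relevance with quality, then sort descending."""
    scored = [
        (alpha * p["relevance"] + (1 - alpha) * quality_score(p), p)
        for p in candidates
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in scored]

papers = [
    {"title": "A", "relevance": 0.9, "citation_count": 5, "venue_rank": 0.2},
    {"title": "B", "relevance": 0.8, "citation_count": 300, "venue_rank": 0.9},
]
ranked = rerank(papers)  # B outranks A once quality is factored in
```

With `alpha=0.7`, paper B's strong quality signals (0.95) outweigh paper A's slight relevance edge, so B is selected first; tuning `alpha` trades off topical relevance against source quality.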
Similar Papers
SurveyGen-I: Consistent Scientific Survey Generation with Evolving Plans and Memory-Guided Writing
Computation and Language
Writes better science reports automatically.
Benchmarking Computer Science Survey Generation
Computation and Language
Helps computers write summaries of science papers.
SurveyEval: Towards Comprehensive Evaluation of LLM-Generated Academic Surveys
Computation and Language
Tests how well computers write survey answers.