SurveyForge: On the Outline Heuristics, Memory-Driven Generation, and Multi-dimensional Evaluation for Automated Survey Writing
By: Xiangchao Yan, Shiyang Feng, Jiakang Yuan, and others
Potential Business Impact:
Helps AI write research survey papers that approach human quality.
Survey papers play a crucial role in scientific research, especially given the rapid growth of research publications. Recently, researchers have begun using LLMs to automate survey generation for better efficiency. However, the quality gap between LLM-generated surveys and those written by humans remains significant, particularly in outline quality and citation accuracy. To close these gaps, we introduce SurveyForge, which first generates the outline by analyzing the logical structure of human-written outlines and consulting retrieved domain-related articles. Subsequently, leveraging high-quality papers retrieved from memory by our scholar navigation agent, SurveyForge automatically generates and refines the survey content. Moreover, to achieve a comprehensive evaluation, we construct SurveyBench, which includes 100 human-written survey papers for win-rate comparison and assesses AI-generated surveys across three dimensions: reference, outline, and content quality. Experiments demonstrate that SurveyForge outperforms previous works such as AutoSurvey.
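The abstract's win-rate comparison can be illustrated with a minimal sketch. This is not the paper's actual evaluation code: the scores are invented, and averaging the three SurveyBench dimensions (reference, outline, content) with equal weights is an assumption, since the paper may weight or aggregate them differently.

```python
from statistics import mean

# SurveyBench's three evaluation dimensions, per the abstract.
DIMENSIONS = ("reference", "outline", "content")

def win_rate(ai_scores, human_scores):
    """Fraction of paired topics where the AI survey scores higher overall.

    ai_scores / human_scores: one dict per survey topic, mapping each
    dimension name to a numeric score.
    """
    wins = 0
    for ai, human in zip(ai_scores, human_scores):
        # Overall score: unweighted mean across dimensions (an assumption).
        if mean(ai[d] for d in DIMENSIONS) > mean(human[d] for d in DIMENSIONS):
            wins += 1
    return wins / len(ai_scores)

# Hypothetical scores for two survey topics (0-100 scale).
ai = [{"reference": 82, "outline": 78, "content": 75},
      {"reference": 70, "outline": 65, "content": 72}]
human = [{"reference": 80, "outline": 85, "content": 79},
         {"reference": 60, "outline": 70, "content": 68}]

print(win_rate(ai, human))  # AI wins one of two topics -> 0.5
```

With 100 human-written surveys as the comparison set, this kind of paired win rate gives a single headline number while the per-dimension scores localize where the gap lies.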
Similar Papers
SurveyGen-I: Consistent Scientific Survey Generation with Evolving Plans and Memory-Guided Writing
Computation and Language
Writes better science reports automatically.
SurveyBench: How Well Can LLM(-Agents) Write Academic Surveys?
Computation and Language
Tests if AI can write good research summaries.
SurveyEval: Towards Comprehensive Evaluation of LLM-Generated Academic Surveys
Computation and Language
Tests how well AI writes academic surveys.