Beyond "Not Novel Enough": Enriching Scholarly Critique with LLM-Assisted Feedback
By: Osama Mohammed Afzal, Preslav Nakov, Tom Hope, and others
Potential Business Impact:
Helps scientists judge new ideas faster.
Novelty assessment is a central yet understudied aspect of peer review, particularly in high-volume fields like NLP where reviewer capacity is increasingly strained. We present a structured approach to automated novelty evaluation that models expert reviewer behavior in three stages: content extraction from submissions, retrieval and synthesis of related work, and structured comparison for evidence-based assessment. Our method is informed by a large-scale analysis of human-written novelty reviews and captures key patterns such as independent claim verification and contextual reasoning. Evaluated on 182 ICLR 2025 submissions with human-annotated reviewer novelty assessments, the approach achieves 86.5% alignment with human reasoning and 75.3% agreement on novelty conclusions, substantially outperforming existing LLM-based baselines. The method produces detailed, literature-aware analyses and improves consistency over ad hoc reviewer judgments. These results highlight the potential of structured, LLM-assisted approaches to support more rigorous and transparent peer review without displacing human expertise. Data and code are made available.
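To make the three-stage pipeline concrete, here is a minimal Python sketch of how the stages described in the abstract could be wired together: (1) extract the submission's contribution claims, (2) retrieve and synthesize related work, and (3) produce a structured, evidence-based novelty assessment. The helpers call_llm and search_papers, the prompt wording, and all function names are hypothetical placeholders for illustration, not the authors' released code.

    from typing import List

    def call_llm(prompt: str) -> str:
        """Placeholder for any chat-completion API (assumption, not part of the paper)."""
        raise NotImplementedError

    def search_papers(query: str, k: int = 5) -> List[str]:
        """Placeholder for a literature-retrieval backend, e.g. a paper-search service."""
        raise NotImplementedError

    def extract_claims(submission: str) -> List[str]:
        # Stage 1: content extraction -- list the paper's core contribution claims.
        out = call_llm(f"List the main contribution claims of this paper:\n{submission}")
        return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

    def retrieve_related_work(claims: List[str]) -> List[str]:
        # Stage 2: retrieval and synthesis -- gather the closest prior work for each claim.
        related: List[str] = []
        for claim in claims:
            related.extend(search_papers(claim))
        return related

    def assess_novelty(submission: str) -> str:
        # Stage 3: structured comparison -- contrast each claim with retrieved evidence.
        claims = extract_claims(submission)
        related = retrieve_related_work(claims)
        prompt = (
            "For each claim below, compare it against the retrieved related work and "
            "conclude whether it is novel, citing the supporting evidence.\n"
            f"Claims: {claims}\nRelated work: {related}"
        )
        return call_llm(prompt)

In this sketch, independent claim verification corresponds to running retrieval per extracted claim rather than on the whole abstract, which mirrors the reviewer behavior the abstract describes.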
Similar Papers
Beyond "Not Novel Enough": Enriching Scholarly Critique with LLM-Assisted Feedback
Computation and Language
Helps science papers get reviewed faster.
LLM-REVal: Can We Trust LLM Reviewers Yet?
Computation and Language
AI reviewers unfairly favor AI-written papers.
Unveiling the Merits and Defects of LLMs in Automatic Review Generation for Scientific Papers
Computation and Language
Helps computers write better science paper reviews.