Score: 1

The Good, the Bad and the Constructive: Automatically Measuring Peer Review's Utility for Authors

Published: August 31, 2025 | arXiv ID: 2509.04484v2

By: Abdelrahman Sadallah, Tim Baumgärtner, Iryna Gurevych, and more

Potential Business Impact:

Helps automated systems judge and improve how useful peer-review feedback is for paper authors.

Business Areas:
Usability Testing, Data and Analytics, Design

Providing constructive feedback to paper authors is a core component of peer review. As reviewers have increasingly less time to perform reviews, automated support systems are needed to maintain high reviewing quality and keep the feedback in reviews useful for authors. To this end, we identify four key aspects of review comments (individual points in the weakness sections of reviews) that drive their utility for authors: Actionability, Grounding & Specificity, Verifiability, and Helpfulness. To enable the evaluation and development of models assessing review comments, we introduce the RevUtil dataset. We collect 1,430 human-labeled review comments and scale our data with 10k synthetically labeled comments for training purposes. The synthetic data additionally contains rationales, i.e., explanations for the aspect score of a review comment. Using the RevUtil dataset, we benchmark fine-tuned models for assessing review comments on these aspects and generating rationales. Our experiments demonstrate that these fine-tuned models achieve agreement levels with humans comparable to, and in some cases exceeding, those of powerful closed models like GPT-4o. Our analysis further reveals that machine-generated reviews generally underperform human reviews on our four aspects.
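The abstract describes two measurable steps: scoring individual review comments on four aspects, and checking how well model scores agree with human labels. The sketch below illustrates one plausible way to wire these together. The prompt wording, the 1-5 score scale, and the `query_llm` helper are assumptions for illustration, not the authors' actual setup; the quadratically weighted Cohen's kappa is a common choice for ordinal agreement, not necessarily the metric used in the paper.

```python
# Hypothetical sketch: score a review comment on the four RevUtil aspects
# with an LLM judge, then measure human-model agreement on one aspect.
from sklearn.metrics import cohen_kappa_score

ASPECTS = ["Actionability", "Grounding & Specificity",
           "Verifiability", "Helpfulness"]

def build_prompt(comment: str, aspect: str) -> str:
    """Assemble a rubric-style prompt asking for a 1-5 score (assumed scale)."""
    return (
        f"Rate the following peer-review comment for {aspect} "
        f"on a 1-5 scale and briefly justify the score.\n\n"
        f"Comment: {comment}\nScore:"
    )

def score_comment(comment: str, query_llm) -> dict[str, int]:
    """Query an LLM judge (any callable str -> str) once per aspect.

    `query_llm` is a placeholder for whatever model backend is used;
    it is not an API from the paper.
    """
    scores = {}
    for aspect in ASPECTS:
        reply = query_llm(build_prompt(comment, aspect))
        scores[aspect] = int(reply.strip().split()[0])  # take the leading digit
    return scores

# Agreement between human and model labels for one aspect, using
# quadratically weighted Cohen's kappa (illustrative toy labels).
human_labels = [3, 4, 2, 5, 1]
model_labels = [3, 3, 2, 4, 1]
kappa = cohen_kappa_score(human_labels, model_labels, weights="quadratic")
print(f"quadratic kappa: {kappa:.2f}")
```

Quadratic weighting penalizes large disagreements (e.g., a human 5 vs. a model 1) more than near misses, which suits ordinal aspect scores; an unweighted kappa would treat all disagreements equally.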

Country of Origin
🇦🇪 🇩🇪 United Arab Emirates, Germany

Page Count
31 pages

Category
Computer Science:
Computation and Language