Position on LLM-Assisted Peer Review: Addressing Reviewer Gap through Mentoring and Feedback
By: JungMin Yun, JuneHyoung Kwon, MiHyeon Kim, and more
Potential Business Impact:
Helps scientists write better peer reviews of research papers.
The rapid expansion of AI research has intensified the Reviewer Gap, threatening the sustainability of peer review and perpetuating a cycle of low-quality evaluations. This position paper critiques existing LLM approaches that automatically generate reviews and argues for a paradigm shift that positions LLMs as tools for assisting and educating human reviewers. We define the core principles of high-quality peer review and propose two complementary systems grounded in these foundations: (i) an LLM-assisted mentoring system that cultivates reviewers' long-term competencies, and (ii) an LLM-assisted feedback system that helps reviewers refine the quality of their reviews. This human-centered approach aims to strengthen reviewer expertise and contribute to a more sustainable scholarly ecosystem.
Similar Papers
The AI Imperative: Scaling High-Quality Peer Review in Machine Learning
Artificial Intelligence
AI helps scientists check research faster.
Can LLM feedback enhance review quality? A randomized study of 20K reviews at ICLR 2025
Artificial Intelligence
AI feedback helps reviewers of AI papers write better reviews.
Human and Machine: How Software Engineers Perceive and Engage with AI-Assisted Code Reviews Compared to Their Peers
Software Engineering
AI helps review computer code, but people still decide.