Multimodal Peer Review Simulation with Actionable To-Do Recommendations for Community-Aware Manuscript Revisions
By: Mengze Hong, Di Jiang, Weiwei Zhao, and more
Potential Business Impact:
Helps researchers improve manuscripts before submission by turning AI-generated peer review feedback into actionable revision to-dos.
While large language models (LLMs) offer promising capabilities for automating academic workflows, existing systems for academic peer review remain constrained by text-only inputs, limited contextual grounding, and a lack of actionable feedback. In this work, we present an interactive web-based system for multimodal, community-aware peer review simulation to enable effective manuscript revisions before paper submission. Our framework integrates textual and visual information through multimodal LLMs, enhances review quality via retrieval-augmented generation (RAG) grounded in web-scale OpenReview data, and converts generated reviews into actionable to-do lists using the proposed Action:Objective[#] format, providing structured and traceable guidance. The system integrates seamlessly into existing academic writing platforms, providing interactive interfaces for real-time feedback and revision tracking. Experimental results highlight the effectiveness of the proposed system in generating more comprehensive and useful reviews aligned with expert standards, surpassing ablated baselines and advancing transparent, human-centered scholarly assistance.
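The abstract does not spell out the grammar of the Action:Objective[#] format, so the sketch below is only a minimal illustration of the idea of structured, traceable to-do items. It assumes review output lines such as "Revise:Clarify the ablation setup[3]", where the bracketed number points back to the review comment the item traces to; the pattern, class, and function names are hypothetical, not the authors' implementation.

```python
import re
from dataclasses import dataclass

# Hypothetical parser for to-do lines in an assumed Action:Objective[#] form,
# e.g. "Revise:Clarify the ablation setup[3]". The exact syntax used by the
# system is not given in the abstract.
TODO_PATTERN = re.compile(r"^(?P<action>\w+):(?P<objective>.+?)\[(?P<ref>\d+)\]$")

@dataclass
class TodoItem:
    action: str      # kind of edit to make, e.g. "Revise", "Add", "Cite"
    objective: str   # concrete goal of the edit
    review_ref: int  # index of the review comment this item traces back to

def parse_todo_list(review_lines: list[str]) -> list[TodoItem]:
    """Convert generated review lines into structured, traceable to-do items."""
    items = []
    for line in review_lines:
        match = TODO_PATTERN.match(line.strip())
        if match:
            items.append(TodoItem(
                action=match["action"],
                objective=match["objective"].strip(),
                review_ref=int(match["ref"]),
            ))
    return items

if __name__ == "__main__":
    sample = [
        "Revise:Clarify the ablation setup[3]",
        "Add:Report variance across random seeds[7]",
    ]
    for item in parse_todo_list(sample):
        print(item)
```

Keeping the review-comment index on each item is what makes the guidance traceable: a revision-tracking interface can show, for every to-do, which part of the generated review motivated it.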
Similar Papers
MMReview: A Multidisciplinary and Multimodal Benchmark for LLM-Based Peer Review Automation
Computation and Language
Benchmarks how well AI can review science papers.
The Good, the Bad and the Constructive: Automatically Measuring Peer Review's Utility for Authors
Computation and Language
Measures how useful peer review feedback is for authors.