AstroReview: An LLM-driven Multi-Agent Framework for Telescope Proposal Peer Review and Refinement
By: Yutong Wang, Yunxiang Xiao, Yonglin Tian, and more
Competitive access to modern observatories has intensified as proposal volumes outpace available telescope time, making timely, consistent, and transparent peer review a critical bottleneck for the advancement of astronomy. Automating parts of this process is therefore both scientifically significant and operationally necessary to ensure fair allocation and reproducible decisions at scale. We present AstroReview, an open-source, agent-based framework that automates proposal review in three stages: (i) novelty and scientific merit, (ii) feasibility and expected yield, and (iii) meta-review and reliability verification. Task isolation and explicit reasoning traces curb hallucinations and improve transparency. Without any domain-specific fine-tuning, AstroReview (applied in our experiments only to the final stage) correctly identifies genuinely accepted proposals with 87% accuracy. The AstroReview-in-Action module replicates the review-and-refinement loop; with its integrated Proposal Authoring Agent, the acceptance rate of revised drafts increases by 66% after two iterations, showing that iterative feedback combined with automated meta-review and reliability verification delivers measurable quality gains. Together, these results point to a practical path toward scalable, auditable, higher-throughput proposal review for resource-limited facilities.
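The three-stage pipeline described in the abstract can be sketched as a sequence of isolated agents whose verdicts and reasoning traces feed a final meta-review step. This is a minimal illustrative sketch, not the paper's implementation: the agent names, `Review` dataclass, and pass/accept logic are all assumptions introduced here for clarity; in AstroReview each agent would be backed by an LLM call rather than the stub logic below.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    stage: str
    verdict: str
    reasoning_trace: list[str] = field(default_factory=list)

# Hypothetical stage agents; in the real system each would be an LLM-driven
# reviewer confined to a single task (task isolation).
def novelty_agent(proposal: str) -> Review:
    trace = [f"assessed novelty and scientific merit of: {proposal[:40]}"]
    return Review("novelty_and_merit", "pass", trace)

def feasibility_agent(proposal: str) -> Review:
    trace = [f"checked feasibility and expected yield of: {proposal[:40]}"]
    return Review("feasibility_and_yield", "pass", trace)

def meta_review_agent(reviews: list[Review]) -> Review:
    # Meta-review and reliability verification: audit every prior verdict
    # and keep an explicit trace of how the final decision was reached.
    verdict = "accept" if all(r.verdict == "pass" for r in reviews) else "revise"
    trace = [f"{r.stage}: {r.verdict}" for r in reviews]
    return Review("meta_review", verdict, trace)

def review_proposal(proposal: str) -> Review:
    # Each stage sees only the proposal, never another agent's internal state.
    stage_reviews = [novelty_agent(proposal), feasibility_agent(proposal)]
    return meta_review_agent(stage_reviews)

final = review_proposal("Deep imaging of tidal features in nearby dwarf galaxies")
print(final.verdict)  # accept
```

A "revise" verdict from the meta-review is what would drive the review-and-refinement loop: the traces give the authoring agent concrete, auditable feedback to revise against before the next iteration.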
Similar Papers
LatteReview: A Multi-Agent Framework for Systematic Review Automation Using Large Language Models
Computation and Language
Automates reading research papers to find answers faster.
Agentic AutoSurvey: Let LLMs Survey LLMs
Information Retrieval
Helps scientists quickly understand lots of research.