Justice in Judgment: Unveiling (Hidden) Bias in LLM-assisted Peer Reviews
By: Sai Suresh Marchala Vasu, Ivaxi Sheth, Hui-Po Wang, and more
Potential Business Impact:
AI-generated reviews unfairly favor famous schools and show subtle gender bias.
The adoption of large language models (LLMs) is transforming the peer review process, from assisting reviewers in writing more detailed evaluations to generating entire reviews automatically. While these capabilities offer exciting opportunities, they also raise critical concerns about fairness and reliability. In this paper, we investigate bias in LLM-generated peer reviews by conducting controlled experiments on sensitive metadata, including author affiliation and gender. Our analysis consistently shows affiliation bias favoring institutions highly ranked on common academic rankings. We also find gender preferences that, while subtle in magnitude, have the potential to compound over time. Notably, we uncover implicit biases that become more evident with token-based soft ratings.
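The paper's exact procedure is not spelled out here, but a token-based soft rating is commonly computed as the probability-weighted average over candidate score tokens rather than just the single most likely score. Below is a minimal sketch of that idea, assuming the reviewer LLM is prompted to emit a 1-10 overall score as one token and that the serving API exposes top-k next-token log-probabilities; the function name, score range, and example distributions are illustrative assumptions, not the authors' code.

```python
import math

def soft_rating(top_logprobs: dict[str, float],
                min_score: int = 1, max_score: int = 10) -> float:
    """Probability-weighted ("soft") rating from next-token log-probabilities.

    Instead of keeping only the most likely score token (a hard rating), we
    average over every candidate score token weighted by its probability, so
    subtle shifts in the score distribution show up even when the argmax
    rating is unchanged.

    top_logprobs maps candidate next-token strings (e.g. "6", "7") to their
    log-probabilities, as returned by an API exposing top-k logprobs.
    """
    valid = range(min_score, max_score + 1)
    # Keep only tokens that parse to an in-range integer score.
    scores: dict[int, float] = {}
    for token, logprob in top_logprobs.items():
        tok = token.strip()
        if tok.isdigit() and int(tok) in valid:
            scores[int(tok)] = scores.get(int(tok), 0.0) + math.exp(logprob)
    total = sum(scores.values())
    if total == 0:
        raise ValueError("no valid score tokens among the top logprobs")
    # Renormalize over the score tokens and take the expectation.
    return sum(s * p for s, p in scores.items()) / total


if __name__ == "__main__":
    # Hypothetical logprobs for the token right after "Overall rating: ".
    # Two prompts differing only in author affiliation can both argmax to
    # "7" yet yield different soft ratings, exposing an implicit bias.
    baseline = {"7": math.log(0.55), "6": math.log(0.30),
                "8": math.log(0.10), "5": math.log(0.05)}
    shifted  = {"7": math.log(0.45), "8": math.log(0.35),
                "6": math.log(0.15), "9": math.log(0.05)}
    print(f"baseline soft rating: {soft_rating(baseline):.2f}")  # 6.70
    print(f"shifted  soft rating: {soft_rating(shifted):.2f}")   # 7.30
```

In this sketch, both hypothetical prompts would receive the same hard rating of 7, yet the soft ratings differ by 0.6 points, which is how such a measure can surface biases that hard ratings hide.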
Similar Papers
LLM-REVal: Can We Trust LLM Reviewers Yet?
Computation and Language
AI reviewers unfairly favor AI-written papers.
Prestige over merit: An adapted audit of LLM bias in peer review
Computers and Society
AI favors papers from famous schools.