Remembering Unequally: Global and Disciplinary Bias in LLM-Generated Co-Authorship Networks
By: Ghazal Kalhor, Afra Mashhadi
Potential Business Impact:
Finds unfairness in AI-generated summaries of scientific research.
Ongoing breakthroughs in Large Language Models (LLMs) are reshaping search and recommendation platforms at their core. While this shift unlocks powerful new scientometric tools, it also exposes critical fairness and bias issues that could erode the integrity of the information ecosystem. Additionally, as LLMs become more integrated into web-based scholarly search tools, their ability to generate research summaries from memorized data introduces new dimensions to these challenges. The extent of memorization in LLMs can affect the accuracy and fairness of the co-authorship networks they produce, potentially reflecting and amplifying existing biases within the scientific community and across different regions. This study critically examines the impact of LLM memorization on co-authorship networks. To this end, we assess memorization effects across three prominent models (DeepSeek R1, Llama 4 Scout, and Mixtral 8x7B), analyzing how memorization-driven outputs vary across academic disciplines and world regions. While our global analysis reveals a consistent bias favoring highly cited researchers, this pattern is not uniformly observed: certain disciplines, such as Clinical Medicine, and regions, including parts of Africa, show more balanced representation, pointing to areas where LLM training data may reflect greater equity. These findings underscore both the risks and opportunities in deploying LLMs for scholarly discovery.
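The abstract describes comparing LLM-generated co-authorship networks against citation data to detect bias toward highly cited researchers. As a rough illustration of that kind of analysis (not the authors' actual pipeline), the sketch below builds a co-authorship graph from hypothetical LLM-generated author lists and computes the share of authors falling in a top citation quantile; the toy data, function names, and the specific metric are all illustrative assumptions.

# Minimal sketch, assuming co-authorship lists have already been extracted
# from LLM outputs; the citation data source and the bias metric are
# hypothetical placeholders, not the paper's actual method.

import networkx as nx


def build_coauthorship_graph(author_lists):
    """Build an undirected, weighted co-authorship graph from per-paper author lists."""
    graph = nx.Graph()
    for authors in author_lists:
        for i, a in enumerate(authors):
            for b in authors[i + 1:]:
                # Increment the edge weight for each paper the pair co-authored.
                if graph.has_edge(a, b):
                    graph[a][b]["weight"] += 1
                else:
                    graph.add_edge(a, b, weight=1)
    return graph


def highly_cited_share(graph, citation_counts, top_quantile=0.9):
    """Fraction of authors in the generated network whose citation count falls
    in the top quantile of the reference set (illustrative metric only)."""
    counts = sorted(citation_counts.values())
    threshold = counts[int(top_quantile * (len(counts) - 1))]
    highly_cited = [n for n in graph.nodes if citation_counts.get(n, 0) >= threshold]
    return len(highly_cited) / max(graph.number_of_nodes(), 1)


# Toy example (purely illustrative, not real results from the study).
papers = [["A. Smith", "B. Chen"], ["A. Smith", "C. Diallo"], ["B. Chen", "C. Diallo"]]
citations = {"A. Smith": 12000, "B. Chen": 300, "C. Diallo": 150}
g = build_coauthorship_graph(papers)
print(f"Share of highly cited authors: {highly_cited_share(g, citations):.2f}")

Repeating this kind of comparison per discipline and per region would yield the sort of breakdown the abstract reports, where some fields and regions show more balanced representation than others.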
Similar Papers
Where Should I Study? Biased Language Models Decide! Evaluating Fairness in LMs for Academic Recommendations
Computation and Language
AI unfairly favors rich countries and men.
Justice in Judgment: Unveiling (Hidden) Bias in LLM-assisted Peer Reviews
Computers and Society
AI reviews unfairly favor famous schools and some genders.