Score: 2

Remembering Unequally: Global and Disciplinary Bias in LLM-Generated Co-Authorship Networks

Published: November 1, 2025 | arXiv ID: 2511.00476v1

By: Ghazal Kalhor, Afra Mashhadi

BigTech Affiliations: University of Washington

Potential Business Impact:

Reveals bias in LLM-generated summaries of scholarly research and collaboration networks.

Business Areas:
Semantic Search, Internet Services

Ongoing breakthroughs in Large Language Models (LLMs) are reshaping search and recommendation platforms at their core. While this shift unlocks powerful new scientometric tools, it also exposes critical fairness and bias issues that could erode the integrity of the information ecosystem. Additionally, as LLMs become more integrated into web-based scholarly search tools, their ability to generate research summaries from memorized data introduces new dimensions to these challenges. The extent of memorization in LLMs can affect the accuracy and fairness of the co-authorship networks they produce, potentially reflecting and amplifying existing biases within the scientific community and across regions. This study critically examines the impact of LLM memorization on generated co-authorship networks. To this end, we assess memorization effects across three prominent models (DeepSeek R1, Llama 4 Scout, and Mixtral 8x7B), analyzing how memorization-driven outputs vary across academic disciplines and world regions. While our global analysis reveals a consistent bias favoring highly cited researchers, this pattern is not uniformly observed. Certain disciplines, such as Clinical Medicine, and regions, including parts of Africa, show more balanced representation, pointing to areas where LLM training data may reflect greater equity. These findings underscore both the risks and opportunities in deploying LLMs for scholarly discovery.
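
The paper's methodology is not reproduced here, but the core comparison it describes can be sketched: build a co-authorship graph from the collaborators an LLM "remembers" for a set of researchers, build the same graph from a ground-truth bibliographic record, and compare how citation counts are represented in each. The snippet below is a minimal illustration of that idea only; the `llm_coauthors`, `true_coauthors`, and `citations` inputs are hypothetical placeholders, not the authors' data or pipeline.

```python
# Minimal sketch of the citation-bias comparison described above.
# All inputs are hypothetical placeholders, not the paper's data.
import networkx as nx
from statistics import mean

# Hypothetical recalled vs. ground-truth collaborator lists for focal authors.
llm_coauthors = {"A": ["B", "C"], "D": ["B"]}             # what the LLM recalls
true_coauthors = {"A": ["B", "C", "E"], "D": ["B", "F"]}  # bibliographic record
citations = {"A": 5200, "B": 4100, "C": 3900, "D": 150, "E": 40, "F": 12}

def build_network(coauthor_lists):
    """Build an undirected co-authorship graph from author -> coauthors lists."""
    g = nx.Graph()
    for author, coauthors in coauthor_lists.items():
        g.add_edges_from((author, c) for c in coauthors)
    return g

def mean_citations(g):
    """Average citation count of the researchers present in the graph."""
    return mean(citations[n] for n in g.nodes)

llm_net = build_network(llm_coauthors)
true_net = build_network(true_coauthors)

# If the LLM network's mean is much higher, lowly cited collaborators were
# disproportionately dropped from the model's recalled networks -- the kind
# of bias toward highly cited researchers the study reports globally.
print(f"LLM network mean citations:  {mean_citations(llm_net):.0f}")
print(f"Ground-truth mean citations: {mean_citations(true_net):.0f}")
```

Repeating such a comparison per discipline and per world region, as the study does across the three models, would show where recalled networks are more or less equitable.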

Country of Origin
🇮🇷 🇺🇸 Iran, United States

Page Count
25 pages

Category
Computer Science:
Computation and Language