How Do LLM-Generated Texts Impact Term-Based Retrieval Models?
By: Wei Huang, Keping Bi, Yinqiong Cai, and others
Potential Business Impact:
Helps search engines find real writing better.
As more content generated by large language models (LLMs) floods the Internet, information retrieval (IR) systems face the challenge of distinguishing and handling a blend of human-authored and machine-generated texts. Recent studies suggest that neural retrievers may exhibit a preferential inclination toward LLM-generated content, while classic term-based retrievers like BM25 tend to favor human-written documents. This paper investigates the influence of LLM-generated content on term-based retrieval models, which are valued for their efficiency and robust generalization across domains. Our linguistic analysis reveals that LLM-generated texts exhibit flatter Zipf slopes in the high-frequency range and steeper slopes in the low-frequency range, higher term specificity, and greater document-level diversity. These traits align with LLMs being trained to optimize reader experience through diverse and precise expression. Our study further explores whether term-based retrieval models exhibit source bias, concluding that these models prioritize documents whose term distributions closely match those of the queries, rather than displaying an inherent source bias. This work provides a foundation for understanding and addressing potential biases in term-based IR systems that manage mixed-source content.
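The Zipf-slope analysis mentioned above can be illustrated with a minimal sketch: rank terms by frequency, then fit a least-squares line to log(frequency) versus log(rank). The slope of that line is the Zipf slope (more negative means a steeper frequency drop-off). This is a generic illustration, not the paper's actual pipeline; the toy `text` and the helper name `zipf_slope` are assumptions for demonstration.

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Estimate the Zipf slope of a token stream.

    Sort term frequencies in descending order, then fit a least-squares
    line to log(frequency) vs. log(rank). The slope is typically negative:
    frequency falls as rank grows.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Toy example (illustrative only, not from the paper):
text = "the cat sat on the mat and the dog sat on the log"
slope = zipf_slope(text.split())
```

Splitting the ranked vocabulary into a high-frequency head and a low-frequency tail and fitting each segment separately would give the two regional slopes the abstract contrasts between human and LLM text.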
Similar Papers
Estimating the prevalence of LLM-assisted text in scholarly writing
Digital Libraries
Detects AI writing in research papers.
The Effect of Document Summarization on LLM-Based Relevance Judgments
Information Retrieval
Lets computers judge search results faster.
Demographically-Inspired Query Variants Using an LLM
Information Retrieval
Makes search engines work better for everyone.