When LLMs Disagree: Diagnosing Relevance Filtering Bias and Retrieval Divergence in SDG Search
By: William A. Ingram, Bipasha Banerjee, Edward A. Fox
Potential Business Impact:
Shows why AI models disagree about which documents matter in search.
Large language models (LLMs) are increasingly used to assign document relevance labels in information retrieval pipelines, especially in domains lacking human-labeled data. However, different models often disagree on borderline cases, raising concerns about how such disagreement affects downstream retrieval. This study examines labeling disagreement between two open-weight LLMs, LLaMA and Qwen, on a corpus of scholarly abstracts related to Sustainable Development Goals (SDGs) 1, 3, and 7. We isolate disagreement subsets and examine their lexical properties, rank-order behavior, and classification predictability. Our results show that model disagreement is systematic, not random: disagreement cases exhibit consistent lexical patterns, produce divergent top-ranked outputs under shared scoring functions, and are distinguishable with AUCs above 0.74 using simple classifiers. These findings suggest that LLM-based filtering introduces structured variability in document retrieval, even under controlled prompting and shared ranking logic. We propose using classification disagreement as an object of analysis in retrieval evaluation, particularly in policy-relevant or thematic search tasks.
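As a rough illustration of the disagreement-predictability analysis the abstract describes, the sketch below trains a simple lexical classifier to tell disagreement cases apart from agreement cases and reports a cross-validated AUC. This is not the authors' pipeline; the function name, input lists (abstracts, labels_llama, labels_qwen), and feature choices are placeholder assumptions.

# Minimal sketch (assumed inputs, not the authors' code): given abstracts with
# relevance labels from two LLMs, test whether disagreement cases are lexically
# predictable using TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def disagreement_auc(abstracts, labels_llama, labels_qwen):
    # Target: 1 if the two models disagree on a document, else 0.
    y = [int(a != b) for a, b in zip(labels_llama, labels_qwen)]

    # Bag-of-words features: unigrams and bigrams over the abstract text.
    X = TfidfVectorizer(ngram_range=(1, 2), min_df=2).fit_transform(abstracts)

    # Simple linear classifier; cross-validated ROC AUC measures how
    # distinguishable disagreement cases are from agreement cases.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    return scores.mean()

# Hypothetical usage:
# auc = disagreement_auc(abstracts, labels_llama, labels_qwen)
# print(f"Disagreement predictability AUC: {auc:.2f}")

An AUC well above 0.5 from a classifier this simple would indicate, as the abstract argues, that model disagreement is systematic rather than random.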
Similar Papers
Query-Document Dense Vectors for LLM Relevance Judgment Bias Analysis
Information Retrieval
Finds where AI makes mistakes judging information.
How Do LLM-Generated Texts Impact Term-Based Retrieval Models?
Information Retrieval
Helps search engines find real writing better.
Demographically-Inspired Query Variants Using an LLM
Information Retrieval
Makes search engines work better for everyone.