Score: 2

The Overlooked Role of Graded Relevance Thresholds in Multilingual Dense Retrieval

Published: January 7, 2026 | arXiv ID: 2601.04395v1

By: Tomer Wullach, Ori Shapira, Amir DN Cohen

Potential Business Impact:

Improves search systems by using how relevant an answer is, not just whether it is relevant.

Business Areas:
Semantic Search, Internet Services

Dense retrieval models are typically fine-tuned with contrastive learning objectives that require binary relevance judgments, even though relevance is inherently graded. We analyze how graded relevance scores and the threshold used to convert them into binary labels affect multilingual dense retrieval. Using a multilingual dataset with LLM-annotated relevance scores, we examine monolingual, multilingual mixture, and cross-lingual retrieval scenarios. Our findings show that the optimal threshold varies systematically across languages and tasks, often reflecting differences in resource level. A well-chosen threshold can improve effectiveness, reduce the amount of fine-tuning data required, and mitigate annotation noise, whereas a poorly chosen one can degrade performance. We argue that graded relevance is a valuable but underutilized signal for dense retrieval, and that threshold calibration should be treated as a principled component of the fine-tuning pipeline.
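To make the core idea concrete, here is a minimal sketch (not the authors' code) of the step the abstract describes: converting graded relevance judgments into binary positive/negative labels for contrastive fine-tuning, with the threshold exposed as an explicit, tunable parameter. The field names, the 0-3 grading scale, and the example data are assumptions for illustration only.

```python
# Illustrative sketch: binarizing graded relevance scores at a threshold.
# The 0-3 scale and field names below are assumptions, not from the paper.

from dataclasses import dataclass


@dataclass
class JudgedPair:
    query: str
    passage: str
    relevance: float  # graded relevance, e.g. 0 (irrelevant) to 3 (highly relevant)


def binarize(pairs, threshold):
    """Split graded judgments into positives and negatives at `threshold`.

    Pairs scoring >= threshold are treated as positives for the contrastive
    objective; the rest serve as negatives.
    """
    positives = [p for p in pairs if p.relevance >= threshold]
    negatives = [p for p in pairs if p.relevance < threshold]
    return positives, negatives


if __name__ == "__main__":
    judged = [
        JudgedPair("what is dense retrieval",
                   "Dense retrieval encodes queries and documents as vectors.", 3.0),
        JudgedPair("what is dense retrieval",
                   "Sparse methods like BM25 rely on exact term matching.", 1.0),
        JudgedPair("what is dense retrieval",
                   "A recipe for tomato soup.", 0.0),
    ]

    # The paper's argument is that this threshold should be calibrated per
    # language and task rather than fixed globally; here we only show the
    # mechanics for two candidate settings.
    for threshold in (1.0, 2.0):
        pos, neg = binarize(judged, threshold)
        print(f"threshold={threshold}: {len(pos)} positives, {len(neg)} negatives")
```

In this toy example, moving the threshold from 1.0 to 2.0 reclassifies the marginally relevant passage from positive to negative, which is exactly the kind of shift the paper studies across languages and retrieval scenarios.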


Page Count
13 pages

Category
Computer Science:
Information Retrieval