Improving Document Retrieval Coherence for Semantically Equivalent Queries
By: Stefano Campese, Alessandro Moschitti, Ivano Lauriola
Potential Business Impact:
Finds better answers even with different questions.
Dense Retrieval (DR) models have proven effective for Document Retrieval and Information Grounding tasks. These models are usually trained and optimized to improve the relevance of the top-ranked documents for a given query. Previous work has shown that popular DR models are sensitive to the query and document lexicon: small variations in it may lead to significant differences in the set of retrieved documents. In this paper, we propose a variation of the Multi-Negative Ranking loss for training DR models that improves their coherence in retrieving the same documents for semantically similar queries. The loss penalizes discrepancies between the top-k documents retrieved for diverse but semantically equivalent queries. We conducted extensive experiments on various datasets: MS-MARCO, Natural Questions, BEIR, and TREC DL 19/20. The results show that models optimized with our loss exhibit (i) lower sensitivity and, (ii) interestingly, higher accuracy.
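The abstract does not give the exact formulation, but the idea of combining a Multiple-Negatives Ranking objective with a penalty on ranking discrepancies between paraphrased queries can be sketched as follows. This is a minimal illustration, not the paper's actual loss: the KL-divergence coherence term, the `alpha` weight, and the `scale` temperature are all assumptions introduced here for the example.

```python
import torch
import torch.nn.functional as F

def mnr_with_coherence(q_emb, q_para_emb, d_emb, alpha=0.5, scale=20.0):
    """Sketch of an MNR loss plus a coherence penalty between
    semantically equivalent queries (hypothetical formulation).

    q_emb:      (B, H) query embeddings
    q_para_emb: (B, H) embeddings of paraphrases of the same queries
    d_emb:      (B, H) positive document embeddings (in-batch negatives)
    """
    # Cosine-similarity score matrices: each query vs. all in-batch docs.
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(q_para_emb, dim=-1)
    d = F.normalize(d_emb, dim=-1)
    scores_q = scale * q @ d.T          # (B, B)
    scores_p = scale * p @ d.T          # (B, B)

    # Standard MNR: the i-th document is the positive for the i-th query,
    # all other in-batch documents act as negatives.
    labels = torch.arange(q.size(0))
    mnr = F.cross_entropy(scores_q, labels) + F.cross_entropy(scores_p, labels)

    # Coherence term (assumed KL form): penalize divergence between the
    # document distributions induced by the two query variants, pushing
    # paraphrases toward the same ranking.
    log_pq = F.log_softmax(scores_q, dim=-1)
    log_pp = F.log_softmax(scores_p, dim=-1)
    coherence = F.kl_div(log_pp, log_pq, reduction="batchmean", log_target=True)

    return mnr + alpha * coherence
```

With `alpha=0`, this reduces to plain in-batch-negatives training; increasing `alpha` trades a little relevance pressure for agreement between equivalent queries, which is the effect the paper reports measuring.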
Similar Papers
Does Generative Retrieval Overcome the Limitations of Dense Retrieval?
Information Retrieval
Finds information by creating answers, not just searching.
Lightweight and Direct Document Relevance Optimization for Generative Information Retrieval
Information Retrieval
Finds better search results without complex math.
BiCA: Effective Biomedical Dense Retrieval with Citation-Aware Hard Negatives
Information Retrieval
Helps computers find science papers better.