Score: 1

LLM-Assisted Pseudo-Relevance Feedback

Published: January 16, 2026 | arXiv ID: 2601.11238v1

By: David Otero, Javier Parapar

Potential Business Impact:

Improves search result quality by having an LLM filter noisy feedback documents before query expansion.

Business Areas:
Semantic Search, Internet Services

Query expansion is a long-standing technique to mitigate vocabulary mismatch in ad hoc Information Retrieval. Pseudo-relevance feedback methods, such as RM3, estimate an expanded query model from the top-ranked documents, but remain vulnerable to topic drift when early results include noisy or tangential content. Recent approaches instead prompt Large Language Models to generate synthetic expansions or query variants. While effective, these methods risk hallucinations and misalignment with collection-specific terminology. We propose a hybrid alternative that preserves the robustness and interpretability of classical PRF while leveraging LLM semantic judgement. Our method inserts an LLM-based filtering stage prior to RM3 estimation: the LLM judges the documents in the initial top-$k$ ranking, and RM3 is computed only over those accepted as relevant. This simple intervention improves over blind PRF and a strong baseline across several datasets and metrics.
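The pipeline described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the names rm3_expansion, llm_filtered_rm3, and judge_relevant are hypothetical, the RM3 estimation is simplified to a score-weighted relevance model interpolated with the original query model, and the LLM relevance judgement is abstracted as a boolean callback.

```python
from collections import Counter
from typing import Callable, Sequence


def rm3_expansion(
    query_terms: Sequence[str],
    feedback_docs: Sequence[Sequence[str]],
    doc_scores: Sequence[float],
    num_expansion_terms: int = 10,
    orig_weight: float = 0.5,
):
    """Estimate a simplified RM3-style expanded query model.

    Assumes non-negative retrieval scores; the relevance model is a
    score-weighted mixture of the feedback documents' term
    distributions, interpolated with the original query model.
    """
    # Relevance model (RM1): weight each document's term distribution
    # by its normalised retrieval score.
    relevance_model = Counter()
    total_score = sum(doc_scores) or 1.0
    for doc, score in zip(feedback_docs, doc_scores):
        doc_len = len(doc) or 1
        for term, count in Counter(doc).items():
            relevance_model[term] += (score / total_score) * (count / doc_len)

    # Original query model (maximum likelihood over query terms).
    query_model = Counter(query_terms)
    q_len = len(query_terms) or 1

    # RM3: linear interpolation of the query model and relevance model.
    expanded = Counter()
    for term in set(relevance_model) | set(query_model):
        expanded[term] = (
            orig_weight * (query_model[term] / q_len)
            + (1.0 - orig_weight) * relevance_model[term]
        )
    return expanded.most_common(num_expansion_terms)


def llm_filtered_rm3(
    query_terms: Sequence[str],
    top_k_docs: Sequence[Sequence[str]],
    top_k_scores: Sequence[float],
    judge_relevant: Callable[[Sequence[str], Sequence[str]], bool],
    **rm3_kwargs,
):
    """Compute RM3 only over top-k documents the LLM judge accepts.

    judge_relevant(query_terms, doc_terms) stands in for prompting an
    LLM to decide whether the document is relevant to the query.
    Falls back to plain (blind) RM3 if every document is rejected.
    """
    kept = [
        (doc, score)
        for doc, score in zip(top_k_docs, top_k_scores)
        if judge_relevant(query_terms, doc)
    ]
    if not kept:
        kept = list(zip(top_k_docs, top_k_scores))
    docs, scores = zip(*kept)
    return rm3_expansion(query_terms, docs, scores, **rm3_kwargs)
```

The only change relative to blind PRF is the filtering list comprehension: the RM3 estimator itself is untouched, which is what lets the method keep the interpretability of the classical weighting while dropping tangential documents that would otherwise cause topic drift.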

Country of Origin
🇪🇸 Spain

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Information Retrieval