Knowledge Compression via Question Generation: Enhancing Multihop Document Retrieval without Fine-tuning
By: Anvi Alex Eponon, Moein Shahiki-Tash, Ildar Batyrshin, and more
Potential Business Impact:
Helps computers find answers by asking questions.
This study presents a question-based knowledge encoding approach that improves retrieval-augmented generation (RAG) systems without requiring fine-tuning or traditional chunking. We encode textual content using generated questions that span the lexical and semantic space, creating targeted retrieval cues combined with a custom syntactic reranking method. In single-hop retrieval over 109 scientific papers, our approach achieves a Recall@3 of 0.84, outperforming traditional chunking methods by 60%. We also introduce "paper-cards", concise paper summaries under 300 characters, which enhance BM25 retrieval, increasing MRR@3 from 0.56 to 0.85 on simplified technical queries. For multihop tasks, our reranking method reaches an F1 score of 0.52 with LLaMA2-Chat-7B on the LongBench 2WikiMultihopQA dataset, surpassing chunking and fine-tuned baselines, which score 0.328 and 0.412, respectively. This method eliminates fine-tuning requirements, reduces retrieval latency, enables intuitive question-driven knowledge access, and decreases vector storage demands by 80%, positioning it as a scalable and efficient RAG alternative.
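The core retrieval idea, indexing each paper by generated questions rather than raw text chunks and scoring queries with BM25, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the example questions and paper names are hypothetical stand-ins for LLM-generated questions and real paper-cards, and the BM25 scorer is a plain-Python version of the standard formula.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with standard BM25."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()                      # document frequency per term
    for d in docs_tokens:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)                 # term frequency in this document
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

# Hypothetical index: each paper is represented by generated questions
# (stand-ins for the LLM-generated questions the paper describes).
index = [
    {"paper": "paper_A",
     "questions": ["what dataset is used for multihop evaluation",
                   "how does reranking improve f1 on multihop qa"]},
    {"paper": "paper_B",
     "questions": ["how are paper cards built for bm25 retrieval",
                   "what is the effect of summaries under 300 characters"]},
]

# Flatten to one retrieval unit per generated question, each pointing
# back to its source paper.
units = [(entry["paper"], q.split()) for entry in index
         for q in entry["questions"]]

query = "bm25 retrieval with paper cards".split()
scores = bm25_scores(query, [toks for _, toks in units])
best_paper = units[max(range(len(units)), key=scores.__getitem__)][0]
print(best_paper)  # the paper whose generated question best matches the query
```

Because each unit is a short question rather than a long text chunk, matches are lexically tight, which is the intuition behind the reported storage and latency savings; the paper's syntactic reranking step would then reorder the top candidates.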
Similar Papers
Transforming Questions and Documents for Semantically Aligned Retrieval-Augmented Generation
Computation and Language
Answers hard questions by breaking them down.
Enhancing Document-Level Question Answering via Multi-Hop Retrieval-Augmented Generation with LLaMA 3
Computation and Language
Answers hard questions from long texts better.
FrugalRAG: Learning to retrieve and reason for multi-hop QA
Computation and Language
Answers questions using fewer searches.