MacRAG: Compress, Slice, and Scale-up for Multi-Scale Adaptive Context RAG
By: Woosang Lim, Zekun Li, Gyuwan Kim, and more
Potential Business Impact:
Helps computers answer questions over long documents more accurately.
Long-context large language models (LC LLMs) combined with retrieval-augmented generation (RAG) hold strong potential for complex multi-hop and large-document tasks. However, existing RAG systems often suffer from imprecise retrieval, incomplete context coverage under constrained windows, and fragmented information from suboptimal context construction. We introduce Multi-scale Adaptive Context RAG (MacRAG), a hierarchical RAG framework that compresses and partitions documents into coarse-to-fine granularities, then adaptively merges relevant contexts through real-time chunk- and document-level expansions. By initiating with finest-level retrieval and progressively incorporating broader, higher-level context, MacRAG constructs effective query-specific long contexts, optimizing both precision and coverage. Evaluations on challenging LongBench expansions of HotpotQA, 2WikiMultihopQA, and Musique confirm MacRAG consistently surpasses baseline RAG pipelines in single- and multi-step generation using Llama-3.1-8B, Gemini-1.5-pro, and GPT-4o. Our results establish MacRAG as an efficient, scalable solution for real-world long-context, multi-hop reasoning. Our code is available at https://github.com/Leezekun/MacRAG.
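The core idea described in the abstract, retrieve at the finest granularity and then expand each hit to its parent section or document until a context budget is reached, can be illustrated with a short sketch. This is not the authors' implementation (see the linked repository for that); the `Chunk` hierarchy, the lexical-overlap `score`, and the character budget in `macrag_context` are simplifying assumptions standing in for a real dense retriever and token-based budgeting.

```python
# A minimal sketch of MacRAG-style multi-scale retrieval. Names and scoring are
# illustrative assumptions, not the released code (https://github.com/Leezekun/MacRAG).
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """One node in a coarse-to-fine hierarchy: document -> section -> fine slice."""
    text: str
    level: int                      # 0 = finest slice, higher = coarser
    parent: "Chunk | None" = None
    children: list["Chunk"] = field(default_factory=list)


def score(query: str, chunk: Chunk) -> float:
    """Toy lexical-overlap score; a real system would use a dense retriever."""
    q, c = set(query.lower().split()), set(chunk.text.lower().split())
    return len(q & c) / (len(q) or 1)


def macrag_context(query: str, fine_chunks: list[Chunk],
                   top_k: int = 3, budget_chars: int = 1200) -> str:
    """Retrieve at the finest level, then expand upward to parents while the
    budget allows, mimicking chunk- and document-level expansion."""
    ranked = sorted(fine_chunks, key=lambda c: score(query, c), reverse=True)[:top_k]
    context: list[str] = []
    used = 0
    for chunk in ranked:
        node = chunk
        # Walk upward: add the fine chunk first, then progressively coarser context.
        while node is not None and used + len(node.text) <= budget_chars:
            if node.text not in context:          # avoid duplicated parents
                context.append(node.text)
                used += len(node.text)
            node = node.parent
    return "\n\n".join(context)


if __name__ == "__main__":
    doc = Chunk("Full document about multi-hop QA over long reports.", level=2)
    sec = Chunk("Section: retrieval pipelines and context construction.", level=1, parent=doc)
    fine = Chunk("MacRAG merges fine chunks with their parent sections.", level=0, parent=sec)
    doc.children, sec.children = [sec], [fine]
    print(macrag_context("how does MacRAG construct context?", [fine]))
```

The upward walk is what distinguishes this pattern from flat top-k retrieval: precision comes from scoring the finest slices, while coverage comes from pulling in their broader parents only as the budget permits.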
Similar Papers
Enhancing RAG Efficiency with Adaptive Context Compression
Computation and Language
Makes AI answer questions faster and smarter.
REFRAG: Rethinking RAG based Decoding
Computation and Language
Makes AI answer questions much faster.
Q-RAG: Long Context Multi-step Retrieval via Value-based Embedder Training
Machine Learning (CS)
Helps computers answer hard questions by searching more.