Score: 2

Tagging-Augmented Generation: Assisting Language Models in Finding Intricate Knowledge In Long Contexts

Published: October 27, 2025 | arXiv ID: 2510.22956v1

By: Anwesan Pal, Karen Hovsepian, Tinghao Guo and more

BigTech Affiliations: Amazon

Potential Business Impact:

Improves LLM question answering over long, complex documents by tagging the context, without requiring retrieval pipelines or re-indexing.

Business Areas:
Semantic Search, Internet Services

Recent investigations into the effective context lengths of modern flagship large language models (LLMs) have revealed major limitations in question answering (QA) and reasoning over long and complex contexts, even for the largest and most impressive cadre of models. While approaches like retrieval-augmented generation (RAG) and chunk-based re-ranking attempt to mitigate this issue, they are sensitive to chunking, embedding and retrieval strategies and models, and furthermore rely on extensive pre-processing, knowledge acquisition and indexing steps. In this paper, we propose Tagging-Augmented Generation (TAG), a lightweight data augmentation strategy that boosts LLM performance in long-context scenarios without degrading or altering the integrity and composition of retrieved documents. We validate our hypothesis by augmenting two challenging and directly relevant question-answering benchmarks -- NoLiMa and NovelQA -- and show that tagging the context, or even just adding tag definitions to QA prompts, leads to consistent performance gains over the baseline -- up to 17% for 32K-token contexts, and 2.9% on complex multi-hop reasoning queries requiring knowledge spread across a wide span of text. Additional details are available at https://sites.google.com/view/tag-emnlp.
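The abstract does not specify the tag schema or how tags are inserted, so the snippet below is only a minimal sketch of what context tagging might look like: XML-style entity tags, a hypothetical regex-based tag_context helper, and a prompt that prepends tag definitions (mirroring the "tag definitions only" variant mentioned above). It is an illustrative assumption, not the authors' implementation.

```python
import re

# Hypothetical tag definitions; these could also be prepended to the prompt
# on their own, without tagging the context itself.
TAG_DEFINITIONS = {
    "CHARACTER": "A named person appearing in the text.",
    "PLACE": "A named location appearing in the text.",
}

def tag_context(context: str, entities: dict[str, str]) -> str:
    """Wrap known entity mentions in XML-style tags, leaving the rest
    of the document untouched."""
    tagged = context
    for mention, tag in entities.items():
        tagged = re.sub(
            rf"\b{re.escape(mention)}\b",
            f"<{tag}>{mention}</{tag}>",
            tagged,
        )
    return tagged

def build_prompt(question: str, context: str) -> str:
    """Assemble a QA prompt that lists the tag definitions before the
    (possibly tagged) context."""
    defs = "\n".join(f"<{t}>: {d}" for t, d in TAG_DEFINITIONS.items())
    return (
        f"Tag definitions:\n{defs}\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    ctx = "Ada travelled from Lagos to Kyoto before meeting Brontë."
    entities = {
        "Ada": "CHARACTER",
        "Brontë": "CHARACTER",
        "Lagos": "PLACE",
        "Kyoto": "PLACE",
    }
    print(build_prompt("Where did Ada meet Brontë?", tag_context(ctx, entities)))
```

Because the tags are inserted inline, the original document text is preserved verbatim around them, which is what allows this kind of augmentation to avoid the chunking and indexing steps that RAG-style pipelines depend on.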

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Computation and Language