
Concept than Document: Context Compression via AMR-based Conceptual Entropy

Published: November 24, 2025 | arXiv ID: 2511.18832v1

By: Kaize Shi, Xueyao Sun, Xiaohui Tao, and more

Potential Business Impact:

Removes redundant text from long supporting documents so AI systems can answer questions more accurately and at lower computational cost.

Business Areas:
Semantic Search, Internet Services

Large Language Models (LLMs) face information overload when handling long contexts, particularly in Retrieval-Augmented Generation (RAG), where extensive supporting documents often introduce redundant content. This issue not only weakens reasoning accuracy but also increases computational overhead. We propose an unsupervised context compression framework that exploits Abstract Meaning Representation (AMR) graphs to preserve semantically essential information while filtering out irrelevant text. By quantifying node-level entropy within AMR graphs, our method estimates the conceptual importance of each node, enabling the retention of core semantics. Specifically, we construct AMR graphs from raw contexts, compute the conceptual entropy of each node, and screen the significant, informative nodes to form a context that is more condensed and semantically focused than the raw documents. Experiments on the PopQA and EntityQuestions datasets show that our method outperforms the vanilla setting and other baselines, achieving higher accuracy while substantially reducing context length. To the best of our knowledge, this is the first work introducing AMR-based conceptual entropy for context compression, demonstrating the potential of stable linguistic features in context engineering.
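The abstract outlines a pipeline: parse the retrieved context into AMR graphs, score each concept node by entropy, and keep only the informative parts. Below is a minimal sketch of that idea, assuming a self-information score over hand-specified background concept frequencies and a simple top-k sentence retention rule; the paper's actual entropy definition and node-screening procedure may differ, and the AMR concept lists here are hand-written for self-containment rather than produced by a parser (such as amrlib).

```python
import math

# Hypothetical background probabilities for AMR concepts (made up for this
# sketch; in practice they could be estimated from a large parsed corpus).
background_p = {
    "person": 0.05, "win-01": 0.01, "prize": 0.005, "physics": 0.003,
    "chemistry": 0.003, "date-entity": 0.02, "receive-01": 0.01,
    "thing": 0.10, "say-01": 0.08, "time": 0.09,
}

def self_information(concept: str) -> float:
    """-log2 p(concept): rarer, more specific concepts carry more information."""
    return -math.log2(background_p.get(concept, 0.01))

# Retrieved sentences paired with the concept nodes of their AMR graphs
# (node lists are hand-written stand-ins for AMR parser output).
sentences = [
    ("Marie Curie won the Nobel Prize in Physics in 1903.",
     ["person", "win-01", "prize", "physics", "date-entity"]),
    ("People say many things about that time, by the way.",
     ["person", "say-01", "thing", "time"]),
    ("Curie also received the Nobel Prize in Chemistry in 1911.",
     ["person", "receive-01", "prize", "chemistry", "date-entity"]),
]

def sentence_score(concepts):
    """Average self-information of a sentence's AMR concept nodes."""
    return sum(self_information(c) for c in concepts) / len(concepts)

# Retain the highest-scoring sentences as the compressed context
# (a simple top-k rule, assumed here for illustration).
k = 2
kept = sorted(sentences, key=lambda s: sentence_score(s[1]), reverse=True)[:k]
print(" ".join(text for text, _ in kept))
```

On this toy input the generic filler sentence scores lowest and is dropped, while the two fact-bearing sentences survive, illustrating how concept-level scoring can shorten the context without discarding the semantics a downstream RAG reader needs.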

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language