Sentence-Anchored Gist Compression for Long-Context LLMs

Published: November 11, 2025 | arXiv ID: 2511.08128v1

By: Dmitrii Tarasov, Elizaveta Goncharova, Kuznetsov Andrey

Potential Business Impact:

Lets AI systems handle much longer documents while using less memory and computing power.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This work investigates context compression for Large Language Models (LLMs) using learned compression tokens to reduce the memory and computational demands of processing long sequences. We demonstrate that pre-trained LLMs can be fine-tuned to compress their context by factors of 2x to 8x without significant performance degradation, as evaluated on both short-context and long-context benchmarks. Furthermore, in experiments on a 3-billion-parameter LLaMA model, our method achieves results on par with alternative compression techniques while attaining higher compression ratios.
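To make the idea of learned compression tokens concrete, below is a minimal sketch (not the authors' implementation) of the general mechanism: a small set of trainable "gist" token embeddings attends over the full context, and only their outputs are kept, so downstream processing sees a much shorter sequence. All module names, dimensions, and the cross-attention design are illustrative assumptions.

```python
# Minimal sketch of context compression with learned compression (gist) tokens.
# Hypothetical example; sizes and architecture are assumptions for illustration.
import torch
import torch.nn as nn


class GistCompressor(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, n_gist: int = 16):
        super().__init__()
        # Learned compression-token embeddings used as attention queries.
        self.gist_tokens = nn.Parameter(torch.randn(n_gist, d_model) * 0.02)
        # Cross-attention: gist tokens read from the long context.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, seq_len, d_model) hidden states of the long input.
        batch = context.size(0)
        queries = self.gist_tokens.unsqueeze(0).expand(batch, -1, -1)
        compressed, _ = self.attn(queries, context, context)
        # compressed: (batch, n_gist, d_model) replaces the original context,
        # giving roughly a seq_len / n_gist compression ratio.
        return compressed


if __name__ == "__main__":
    compressor = GistCompressor()
    long_context = torch.randn(2, 128, 256)   # 128-token context
    summary = compressor(long_context)        # 16 compression tokens
    print(summary.shape)                      # torch.Size([2, 16, 256])
```

In this toy setting, compressing 128 tokens into 16 gist tokens corresponds to an 8x compression ratio, the upper end of the range the paper reports for fine-tuned LLMs.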

Page Count
8 pages

Category
Computer Science:
Computation and Language