Score: 3

LMK > CLS: Landmark Pooling for Dense Embeddings

Published: January 29, 2026 | arXiv ID: 2601.21525v1

By: Meet Doshi, Aashka Trivedi, Vishwajeet Kumar, et al.

BigTech Affiliations: IBM

Potential Business Impact:

Improves embedding-based search over long documents while matching existing pooling methods on short texts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Representation learning is central to many downstream tasks such as search, clustering, classification, and reranking. State-of-the-art sequence encoders typically collapse a variable-length token sequence to a single vector using a pooling operator, most commonly a special [CLS] token or mean pooling over token embeddings. In this paper, we identify systematic weaknesses of these pooling strategies: [CLS] tends to concentrate information toward the initial positions of the sequence and can under-represent distributed evidence, while mean pooling can dilute salient local signals, sometimes leading to worse short-context performance. To address these issues, we introduce Landmark (LMK) pooling, which partitions a sequence into chunks, inserts landmark tokens between chunks, and forms the final representation by mean-pooling the landmark token embeddings. This simple mechanism improves long-context extrapolation without sacrificing local salient features, at the cost of introducing a small number of special tokens. We empirically demonstrate that LMK pooling matches existing methods on short-context retrieval tasks and yields substantial improvements on long-context tasks, making it a practical and scalable alternative to existing pooling methods.
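The pooling mechanism itself is simple enough to sketch. Below is a minimal illustration in PyTorch, assuming an encoder that returns per-token hidden states; the chunk size, the [LMK] special-token id, and the helper names are hypothetical choices for the example, not the authors' exact configuration.

```python
# Minimal sketch of landmark (LMK) pooling as described in the abstract.
# Assumptions: a [LMK] special token has been added to the vocabulary,
# and the encoder exposes final-layer hidden states of shape (batch, seq, dim).
import torch

LMK_ID = 32000      # hypothetical id of the added [LMK] special token
CHUNK_SIZE = 64     # hypothetical chunk length


def insert_landmarks(token_ids: list[int]) -> list[int]:
    """Split a token sequence into fixed-size chunks and append a [LMK] token after each chunk."""
    out = []
    for start in range(0, len(token_ids), CHUNK_SIZE):
        out.extend(token_ids[start:start + CHUNK_SIZE])
        out.append(LMK_ID)
    return out


def lmk_pool(hidden_states: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """Mean-pool only the landmark-token embeddings.

    hidden_states: (batch, seq_len, dim) final-layer token embeddings
    input_ids:     (batch, seq_len) token ids, used to locate [LMK] positions
    """
    lmk_mask = (input_ids == LMK_ID).unsqueeze(-1).float()   # (batch, seq_len, 1)
    summed = (hidden_states * lmk_mask).sum(dim=1)           # (batch, dim)
    counts = lmk_mask.sum(dim=1).clamp(min=1.0)              # guard against sequences with no [LMK]
    return summed / counts                                   # (batch, dim) sequence embedding
```

The resulting vector plays the same role as a [CLS] or mean-pooled embedding, so it can be dropped into existing retrieval, clustering, or reranking pipelines unchanged; only the input construction (chunking plus landmark insertion) and the pooling step differ.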

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language