LMK > CLS: Landmark Pooling for Dense Embeddings
By: Meet Doshi, Aashka Trivedi, Vishwajeet Kumar, and more
Potential Business Impact:
Makes search over long documents better by producing higher-quality text embeddings.
Representation learning is central to many downstream tasks such as search, clustering, classification, and reranking. State-of-the-art sequence encoders typically collapse a variable-length token sequence to a single vector using a pooling operator, most commonly a special [CLS] token or mean pooling over token embeddings. In this paper, we identify systematic weaknesses of these pooling strategies: [CLS] tends to concentrate information toward the initial positions of the sequence and can under-represent distributed evidence, while mean pooling can dilute salient local signals, sometimes leading to worse short-context performance. To address these issues, we introduce Landmark (LMK) pooling, which partitions a sequence into chunks, inserts landmark tokens between chunks, and forms the final representation by mean-pooling the landmark token embeddings. This simple mechanism improves long-context extrapolation without sacrificing local salient features, at the cost of introducing a small number of special tokens. We empirically demonstrate that LMK pooling matches existing methods on short-context retrieval tasks and yields substantial improvements on long-context tasks, making it a practical and scalable alternative to existing pooling methods.
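The abstract describes the mechanism only at a high level. Below is a minimal sketch of how landmark pooling could work, assuming a PyTorch-style encoder: the input is split into fixed-size chunks, a landmark token is inserted after each chunk, and the final embedding is the mean of the hidden states at the landmark positions. The chunk size, the landmark token id, and the helper names insert_landmarks / landmark_pool are illustrative assumptions, not details taken from the paper.

```python
import torch

def insert_landmarks(token_ids, chunk_size, landmark_id):
    """Split a token id list into chunks and append a landmark token after each chunk."""
    chunks = [token_ids[i:i + chunk_size] for i in range(0, len(token_ids), chunk_size)]
    out = []
    for chunk in chunks:
        out.extend(chunk)
        out.append(landmark_id)
    return out

def landmark_pool(hidden_states, input_ids, landmark_id):
    """Mean-pool encoder hidden states at the landmark token positions.

    hidden_states: (batch, seq_len, dim) encoder outputs
    input_ids:     (batch, seq_len) token ids, landmark tokens included
    """
    mask = (input_ids == landmark_id).unsqueeze(-1).float()  # (batch, seq_len, 1)
    summed = (hidden_states * mask).sum(dim=1)               # (batch, dim)
    counts = mask.sum(dim=1).clamp(min=1.0)                  # guard against empty mask
    return summed / counts                                   # (batch, dim)

if __name__ == "__main__":
    LANDMARK_ID = 30522  # hypothetical id for a new [LMK] special token
    ids = insert_landmarks(list(range(10)), chunk_size=4, landmark_id=LANDMARK_ID)
    input_ids = torch.tensor([ids])
    hidden = torch.randn(1, input_ids.shape[1], 768)  # stand-in for encoder outputs
    embedding = landmark_pool(hidden, input_ids, LANDMARK_ID)
    print(embedding.shape)  # torch.Size([1, 768])
```

In this sketch, only the landmark positions contribute to the pooled vector, so each landmark can summarize local evidence from its chunk while the final mean aggregates information across the whole sequence.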
Similar Papers
KV-Embedding: Training-free Text Embedding via Internal KV Re-routing in Decoder-only LLMs
Computation and Language
Lets computers understand text better without retraining.
Summaries as Centroids for Interpretable and Scalable Text Clustering
Computation and Language
Makes computer-generated groups of texts easier to understand.
Scaling Language-Centric Omnimodal Representation Learning
Computation and Language
Makes computers understand pictures and words better.