Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation
By: Yeqin Zhang, Yizheng Zhao, Chen Hu, and more
Potential Business Impact:
Makes computers understand writing better for searching.
Text representation plays a critical role in tasks like clustering, retrieval, and other downstream applications. With the emergence of large language models (LLMs), there is increasing interest in harnessing their capabilities for this purpose. However, most LLMs are inherently causal and optimized for next-token prediction, making them suboptimal for producing holistic representations. To address this, recent studies have introduced pretext tasks to adapt LLMs for text representation. Most of these tasks, however, rely on token-level prediction objectives, such as the masked next-token prediction (MNTP) used in LLM2Vec. In this work, we explore the untapped potential of context compression as a pretext task for unsupervised adaptation of LLMs. During compression pre-training, the model learns to generate compact memory tokens that replace the whole context for downstream sequence prediction. Experiments demonstrate that a well-designed compression objective can significantly enhance LLM-based text representations, outperforming models trained with token-level pretext tasks. Further improvements through contrastive learning produce a strong representation model (LLM2Comp) that outperforms contemporary LLM-based text encoders on a wide range of tasks while being more sample-efficient, requiring significantly less training data.
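To make the compression pretext task concrete, the sketch below shows one possible shape of such an objective: a causal model reads the context followed by a few learnable memory tokens, and the continuation is then predicted from the memory states alone, forcing them to summarize the context. This is a minimal illustration, not the authors' implementation; the toy model, the number of memory tokens, and all sizes are assumptions for demonstration.

```python
# Minimal sketch of a context-compression pretext task (illustrative, not LLM2Comp's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, N_MEM = 1000, 128, 4  # toy vocabulary, hidden size, memory-token count (assumptions)

class TinyCausalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, DIM)
        # learnable memory-token embeddings appended after the context
        self.mem_emb = nn.Parameter(torch.randn(N_MEM, DIM) * 0.02)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(DIM, VOCAB)

    def forward(self, embeds):
        # causal mask so each position attends only to earlier positions
        L = embeds.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(L).to(embeds.device)
        return self.backbone(embeds, mask=mask)

def compression_loss(model, context_ids, target_ids):
    B = context_ids.size(0)
    mem = model.mem_emb.unsqueeze(0).expand(B, -1, -1)

    # 1) Compress: run [context ; memory tokens] and keep only the memory states.
    ctx = model.tok_emb(context_ids)
    hidden = model(torch.cat([ctx, mem], dim=1))
    memory_states = hidden[:, -N_MEM:, :]  # compact substitute for the whole context

    # 2) Predict: condition the continuation on the memory states alone.
    tgt = model.tok_emb(target_ids)
    hidden2 = model(torch.cat([memory_states, tgt], dim=1))
    logits = model.lm_head(hidden2[:, N_MEM - 1:-1, :])  # shift by one for next-token prediction
    return F.cross_entropy(logits.reshape(-1, VOCAB), target_ids.reshape(-1))

model = TinyCausalLM()
context = torch.randint(0, VOCAB, (2, 32))  # random token ids just to exercise the loop
target = torch.randint(0, VOCAB, (2, 16))
loss = compression_loss(model, context, target)
loss.backward()
```

Because the continuation can only be predicted through the memory states, gradients push those states to act as a holistic summary of the context, which is the property the paper then exploits for text representation.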
Similar Papers
Sentence-Anchored Gist Compression for Long-Context LLMs
Computation and Language
Makes computers understand longer stories with less effort.
Lossless Compression of Large Language Model-Generated Text via Next-Token Prediction
Machine Learning (CS)
Makes computer text smaller without losing information.
CompLLM: Compression for Long Context Q&A
Computation and Language
Makes AI understand long texts much faster.