Score: 1

Enhancing Latent Computation in Transformers with Latent Tokens

Published: May 19, 2025 | arXiv ID: 2505.12629v1

By: Yuchang Sun, Yanxi Chen, Yaliang Li, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Improves the adaptability of LLMs, especially on out-of-distribution inputs, via a lightweight, parameter-efficient add-on that requires minimal changes to existing Transformer infrastructure.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Augmenting large language models (LLMs) with auxiliary tokens has emerged as a promising strategy for enhancing model performance. In this work, we introduce a lightweight method termed latent tokens; these are dummy tokens that may be non-interpretable in natural language but steer the autoregressive decoding process of a Transformer-based LLM via the attention mechanism. The proposed latent tokens can be seamlessly integrated with a pre-trained Transformer, trained in a parameter-efficient manner, and applied flexibly at inference time, while adding minimal complexity overhead to the existing infrastructure of standard Transformers. We propose several hypotheses about the underlying mechanisms of latent tokens and design synthetic tasks accordingly to verify them. Numerical results confirm that the proposed method noticeably outperforms the baselines, particularly in out-of-distribution generalization scenarios, highlighting its potential in improving the adaptability of LLMs.
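
To make the general idea concrete, below is a minimal sketch (not the authors' code) of how such latent tokens could be attached to a frozen, pre-trained Transformer: a small set of trainable embeddings is spliced into the input sequence, so they influence decoding only through attention while the base model's weights stay untouched. The wrapper name, the number of latent tokens, and the prepend-before-the-prompt placement are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer


class LatentTokenWrapper(nn.Module):
    """Hypothetical wrapper: trainable latent embeddings + frozen base LLM."""

    def __init__(self, base_model, num_latent=8):
        super().__init__()
        self.base = base_model
        for p in self.base.parameters():      # freeze the pre-trained LLM
            p.requires_grad_(False)
        hidden = self.base.get_input_embeddings().embedding_dim
        # Only these embeddings are trainable (parameter-efficient).
        self.latent = nn.Parameter(torch.randn(num_latent, hidden) * 0.02)

    def forward(self, input_ids, attention_mask=None, labels=None):
        tok_emb = self.base.get_input_embeddings()(input_ids)
        bsz = input_ids.size(0)
        lat = self.latent.unsqueeze(0).expand(bsz, -1, -1)
        # Splice latent embeddings in front of the prompt embeddings;
        # they steer generation only via the attention mechanism.
        inputs_embeds = torch.cat([lat, tok_emb], dim=1)
        if attention_mask is not None:
            pad = torch.ones(bsz, lat.size(1), dtype=attention_mask.dtype,
                             device=attention_mask.device)
            attention_mask = torch.cat([pad, attention_mask], dim=1)
        if labels is not None:
            # Latent positions carry no supervision target.
            ignore = torch.full((bsz, lat.size(1)), -100,
                                dtype=labels.dtype, device=labels.device)
            labels = torch.cat([ignore, labels], dim=1)
        return self.base(inputs_embeds=inputs_embeds,
                         attention_mask=attention_mask, labels=labels)


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = LatentTokenWrapper(AutoModelForCausalLM.from_pretrained("gpt2"))
batch = tokenizer("The capital of France is Paris.", return_tensors="pt")
out = model(batch["input_ids"], batch["attention_mask"],
            labels=batch["input_ids"])
out.loss.backward()   # gradients reach only the latent embeddings
```

In this sketch, training updates only `num_latent * hidden` parameters, which reflects the parameter-efficiency claim; where the latent tokens are placed and how many are used are design choices the paper studies, not fixed by this example.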

Country of Origin
🇨🇳 China

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)