Exploring the Latent Capacity of LLMs for One-Step Text Generation
By: Gleb Mezentsev, Ivan Oseledets
Potential Business Impact:
Computers can reproduce long texts in a single step from just a couple of learned "clues."
A recent study showed that large language models (LLMs) can reconstruct surprisingly long texts, up to thousands of tokens, via autoregressive generation from just one specially trained input embedding. In this work, we explore whether such reconstruction is possible without autoregression. We show that frozen LLMs can generate hundreds of accurate tokens in just one forward pass when provided with only two learned embeddings. This reveals a surprising and underexplored capability of LLMs: multi-token generation without iterative decoding. We investigate the behaviour of these embeddings and provide insight into the type of information they encode. We also show empirically that, although these representations are not unique for a given text, they form connected and local regions in embedding space, a property that suggests the potential of learning a dedicated encoder into that space.
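The abstract describes the setup only at a high level: two trainable input embeddings are optimized while the LLM itself stays frozen, and the model is asked to emit all target tokens in a single forward pass rather than token by token. The sketch below illustrates one way such an experiment could be wired up; it is not the authors' code. The model choice ("gpt2"), the use of repeated EOS-token embeddings as placeholder positions, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch (assumptions noted above): optimize two input embeddings so a frozen
# causal LM reconstructs a target text in one forward pass, without autoregression.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()
for p in model.parameters():                 # keep the LLM frozen
    p.requires_grad_(False)
tok = AutoTokenizer.from_pretrained("gpt2")

target = "The quick brown fox jumps over the lazy dog. " * 8   # text to reconstruct
target_ids = tok(target, return_tensors="pt").input_ids.to(device)   # shape [1, N]
N = target_ids.shape[1]

emb_layer = model.get_input_embeddings()
d = emb_layer.embedding_dim

# Two trainable embeddings; the remaining N input positions are copies of a fixed
# placeholder (here the EOS embedding, an assumption), so every target token is
# predicted in the same forward pass.
learned = torch.nn.Parameter(torch.randn(1, 2, d, device=device) * 0.02)
placeholder = emb_layer(torch.tensor([[tok.eos_token_id]], device=device)).detach()
placeholders = placeholder.expand(1, N, d)

opt = torch.optim.Adam([learned], lr=1e-2)
for step in range(1000):
    inputs = torch.cat([learned, placeholders], dim=1)   # [1, 2 + N, d]
    logits = model(inputs_embeds=inputs).logits          # single forward pass
    # logits at input position i predict the token at position i + 1, so
    # positions 1 .. N predict the N target tokens at the placeholder slots
    pred = logits[:, 1 : 1 + N, :]
    loss = F.cross_entropy(pred.reshape(-1, pred.size(-1)), target_ids.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    logits = model(inputs_embeds=torch.cat([learned, placeholders], dim=1)).logits
    recon = logits[:, 1 : 1 + N, :].argmax(-1)
print("token-level reconstruction accuracy:", (recon == target_ids).float().mean().item())
```

The final accuracy printout mirrors the kind of reconstruction measurement the abstract alludes to: the argmax predictions at the placeholder positions are compared against the target tokens, with no iterative decoding involved.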
Similar Papers
Memory Tokens: Large Language Models Can Generate Reversible Sentence Embeddings
Computation and Language
Lets computers remember and perfectly repeat any text.
Let's Predict Sentence by Sentence
Computation and Language
Computers learn to think in ideas, not just words.
Your LLM Knows the Future: Uncovering Its Multi-Token Prediction Potential
Computation and Language
Makes computers write faster by guessing ahead.