Decoding-Free Sampling Strategies for LLM Marginalization
By: David Pohl, Marco Cognetta, Junyoung Lee, and more
Potential Business Impact:
Makes AI understand words better, faster, and cheaper.
Modern language models operate on subword-tokenized text in order to trade off model size, inference speed, and vocabulary coverage. A side effect of this is that, during inference, models are evaluated by measuring the probability of only the specific tokenization produced as output, even though there are many possible ways to represent the same text with a subword vocabulary. Recent studies have argued instead for evaluating LLMs by marginalization: summing the probability mass of all tokenizations of a given text. Exact marginalization is intractable because of the number of possible tokenizations, so it is often approximated via sampling. However, a downside of sampling is that an expensive generation step must be performed by the LLM for each sample, which limits the number of samples that can be acquired within a runtime budget, and therefore the accuracy of the approximation. Since computing the probability of a sequence given its tokenization is cheap compared to actually generating it, we investigate sampling strategies that are decoding-free: they require no generation from the LLM, relying instead on extremely cheap sampling procedures that are model- and tokenizer-agnostic. We evaluate the approximation quality and speed of decoding-free sampling strategies for a number of open models and find that they provide sufficiently accurate marginal estimates at a small fraction of the runtime cost, and we demonstrate their use on a set of downstream inference tasks.
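To make the idea concrete, here is a minimal sketch of decoding-free marginal estimation. It is illustrative only: the toy character-level vocabulary, the uniform random segmentation used as the sampler, and the placeholder score_fn (standing in for a single, cheap teacher-forced forward pass of the LLM) are assumptions for the example, not the paper's actual sampling strategies.

```python
# Sketch: estimate a lower bound on log P(text) by sampling tokenizations
# *without* any LLM decoding, then scoring each sampled tokenization once.
import math
import random
from typing import Callable, List, Set

def sample_tokenization(text: str, vocab: Set[str], rng: random.Random) -> List[str]:
    """Randomly segment `text` into vocabulary items; no LLM calls needed.
    Assumes every single character is in the vocabulary so segmentation
    never gets stuck."""
    tokens, i = [], 0
    while i < len(text):
        # All vocabulary items that match at position i.
        options = [text[i:j] for j in range(i + 1, len(text) + 1) if text[i:j] in vocab]
        piece = rng.choice(options)
        tokens.append(piece)
        i += len(piece)
    return tokens

def estimate_log_marginal(text: str,
                          vocab: Set[str],
                          score_fn: Callable[[List[str]], float],
                          num_samples: int = 100,
                          seed: int = 0) -> float:
    """Lower-bound log P(text) by summing the probabilities of the *unique*
    tokenizations drawn by the cheap sampler. Each score_fn call would be a
    teacher-forced forward pass, not an autoregressive decode."""
    rng = random.Random(seed)
    scores = {}
    for _ in range(num_samples):
        tok = tuple(sample_tokenization(text, vocab, rng))
        if tok not in scores:              # score each distinct tokenization once
            scores[tok] = score_fn(list(tok))
    # log-sum-exp over distinct tokenizations -> lower bound on log P(text)
    m = max(scores.values())
    return m + math.log(sum(math.exp(s - m) for s in scores.values()))

if __name__ == "__main__":
    vocab = {"u", "n", "d", "o", "un", "do", "undo"}
    # Placeholder scorer: in practice, log P(tokens) from the LLM in one forward pass.
    toy_score = lambda toks: -1.5 * len(toks)
    print(estimate_log_marginal("undo", vocab, toy_score, num_samples=50))
```

Because the sampler never calls the model, the only LLM cost is one scoring pass per distinct tokenization, which is what allows many more samples per runtime budget than generation-based sampling.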
Similar Papers
A Comparative Study of Decoding Strategies in Medical Text Generation
Computation and Language
Improves AI's medical answers by choosing the best words.
Inferring from Logits: Exploring Best Practices for Decoding-Free Generative Candidate Selection
Computation and Language
Helps AI choose the best answer faster.
Decoding Uncertainty: The Impact of Decoding Strategies for Uncertainty Estimation in Large Language Models
Computation and Language
Makes AI guess better and know when it's unsure.