Score: 3

Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation

Published: December 16, 2025 | arXiv ID: 2512.14954v1

By: Buu Phan, Ashish Khisti, Karen Ullrich

BigTech Affiliations: Meta

Potential Business Impact:

Lets AI models with different vocabularies (tokenizers) learn from each other, e.g., when distilling a large model into a smaller one.

Business Areas:
Text Analytics, Data and Analytics, Software

Computing next-token likelihood ratios between two language models (LMs) is a standard task in training paradigms such as knowledge distillation. Since this requires both models to share the same probability space, it becomes challenging when the teacher and student LMs use different tokenizers, for instance, when edge-device deployment necessitates a smaller vocabulary to lower memory overhead. In this work, we address this vocabulary misalignment problem by uncovering an implicit recursive structure in the commonly deployed Byte-Pair Encoding (BPE) algorithm and using it to build a probabilistic framework for cross-tokenizer likelihood scoring. Our method enables sequence likelihood evaluation for vocabularies that differ from the teacher model's native tokenizer, addressing two scenarios: when the student vocabulary is a subset of the teacher vocabulary, and the general case where it is arbitrary. In the subset regime, our framework computes exact likelihoods and provides next-token probabilities for sequential sampling with only O(1) model evaluations per token. When used for distillation, this yields up to a 12% reduction in memory footprint for the Qwen2.5-1.5B model while also improving baseline performance by up to 4% on the evaluated tasks. For the general case, we introduce a rigorous lossless procedure that leverages BPE's recursive structure, complemented by a fast approximation that keeps large-vocabulary settings practical. Applied to distillation for mathematical reasoning, our approach improves GSM8K accuracy by more than 2% over the current state of the art.
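To see why vocabulary misalignment makes likelihood ratios awkward, the toy sketch below (not the paper's algorithm; the greedy tokenizer and the unigram probability tables are invented for illustration) shows the same string splitting into different token sequences under a teacher vocabulary and a student subset vocabulary, so per-token distributions live in different probability spaces and only sequence-level log-likelihoods are directly comparable:

```python
import math

def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a toy vocabulary (dict of token -> prob)."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest substring first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"untokenizable text at position {i}")
    return tokens

# Hypothetical unigram "models": token -> probability (illustrative values only).
teacher = {"un": 0.2, "believ": 0.1, "able": 0.2, "u": 0.05, "n": 0.05,
           "b": 0.02, "e": 0.05, "l": 0.03, "i": 0.04, "v": 0.02, "a": 0.05}
# Subset regime from the abstract: the student keeps only the short tokens.
student = {tok: p for tok, p in teacher.items() if len(tok) <= 2}

def seq_logprob(text, model):
    """Sequence log-likelihood under a toy unigram model and its own tokenization."""
    return sum(math.log(model[tok]) for tok in greedy_tokenize(text, model))

text = "unbelievable"
lp_teacher = seq_logprob(text, teacher)  # coarse tokens: un | believ | able
lp_student = seq_logprob(text, student)  # finer tokens from the subset vocabulary
log_ratio = lp_teacher - lp_student      # sequence-level log-likelihood ratio
```

The paper's contribution is to make such cross-vocabulary scoring exact and cheap (O(1) model evaluations per token in the subset regime) for real autoregressive LMs, rather than the unigram stand-ins used here.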

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡¦ United States, Canada

Page Count
18 pages

Category
Computer Science:
Computation and Language