Score: 2

Lizard: An Efficient Linearization Framework for Large Language Models

Published: July 11, 2025 | arXiv ID: 2507.09025v2

By: Chien Van Nguyen, Ruiyi Zhang, Hanieh Deilamsalehy, and more

Potential Business Impact:

Lets language models handle much longer inputs while keeping memory use and inference speed roughly constant.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We propose Lizard, a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into flexible, subquadratic architectures for infinite-context generation. Transformer-based LLMs face significant memory and computational bottlenecks as context lengths increase, due to the quadratic complexity of softmax attention and the growing key-value (KV) cache. Lizard addresses these limitations by introducing a subquadratic attention mechanism that closely approximates softmax attention while preserving the output quality. Unlike previous linearization methods, which are often limited by fixed model structures and therefore exclude gating mechanisms, Lizard incorporates a gating module inspired by recent state-of-the-art linear models. This enables adaptive memory control, supports constant-memory inference, offers strong length generalization, and allows more flexible model design. Lizard combines gated linear attention for global context compression with sliding window attention enhanced by meta memory, forming a hybrid mechanism that captures both long-range dependencies and fine-grained local interactions. Moreover, we introduce a hardware-aware algorithm that accelerates the training speed of our models. Extensive experiments show that Lizard achieves near-lossless recovery of the teacher model's performance across standard language modeling tasks, while significantly outperforming previous linearization methods. On the 5-shot MMLU benchmark, Lizard improves over prior models by 18 points and shows significant improvements on associative recall tasks.
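To make the hybrid mechanism described in the abstract concrete, the sketch below combines a gated linear-attention recurrence (a constant-size state that compresses global context) with a causal sliding-window softmax attention over recent tokens. This is a minimal illustration only: the function names, the gating form, and the fixed mixing weight are assumptions for exposition, not Lizard's actual implementation.

```python
# Illustrative sketch: hybrid of gated linear attention (constant-size state)
# and sliding-window softmax attention. Names and mixing are assumptions.
import torch
import torch.nn.functional as F

def gated_linear_attention(q, k, v, g):
    """Recurrent gated linear attention over a sequence.
    q, k: (T, d), v: (T, d_v), g: (T, d) forget gates in (0, 1).
    The state S has fixed size (d, d_v), so memory is constant in T."""
    T, d = q.shape
    d_v = v.shape[-1]
    S = torch.zeros(d, d_v)
    out = torch.zeros(T, d_v)
    for t in range(T):
        # Decay old memory with the gate, then write the new key-value outer product.
        S = g[t].unsqueeze(-1) * S + torch.outer(k[t], v[t])
        out[t] = q[t] @ S
    return out

def sliding_window_attention(q, k, v, window):
    """Causal softmax attention restricted to the last `window` positions."""
    T = q.shape[0]
    scores = q @ k.T / q.shape[-1] ** 0.5
    idx = torch.arange(T)
    # Mask future tokens and tokens farther back than the window.
    mask = (idx[None, :] > idx[:, None]) | (idx[:, None] - idx[None, :] >= window)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

def hybrid_attention(q, k, v, g, window=64, alpha=0.5):
    """Mix the global (gated linear) and local (windowed softmax) branches.
    The fixed mixing weight `alpha` is a placeholder assumption."""
    global_out = gated_linear_attention(q, k, v, g)
    local_out = sliding_window_attention(q, k, v, window)
    return alpha * global_out + (1 - alpha) * local_out

# Tiny usage example with random tensors.
T, d = 128, 32
q, k, v = torch.randn(T, d), torch.randn(T, d), torch.randn(T, d)
g = torch.sigmoid(torch.randn(T, d))  # per-step, per-channel forget gates
out = hybrid_attention(q, k, v, g)
print(out.shape)  # torch.Size([128, 32])
```

The key property this sketch highlights is that the global branch carries only a fixed-size state forward, while the local branch attends over a bounded window, so neither term grows a quadratic attention matrix or an unbounded KV cache as the context lengthens.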

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Computation and Language