Lizard: An Efficient Linearization Framework for Large Language Models
By: Chien Van Nguyen, Ruiyi Zhang, Hanieh Deilamsalehy, and more
Potential Business Impact:
Lets computers remember more without slowing down.
We propose Lizard, a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into flexible, subquadratic architectures for infinite-context generation. Transformer-based LLMs face significant memory and computational bottlenecks as context lengths increase, due to the quadratic complexity of softmax attention and the growing key-value (KV) cache. Lizard addresses these limitations by introducing a subquadratic attention mechanism that closely approximates softmax attention while preserving output quality. Unlike previous linearization methods, which are often limited by fixed model structures and therefore exclude gating mechanisms, Lizard incorporates a gating module inspired by recent state-of-the-art linear models. This enables adaptive memory control, supports constant-memory inference, offers strong length generalization, and allows more flexible model design. Lizard combines gated linear attention for global context compression with sliding window attention enhanced by meta memory, forming a hybrid mechanism that captures both long-range dependencies and fine-grained local interactions. Moreover, we introduce a hardware-aware algorithm that accelerates training of our models. Extensive experiments show that Lizard achieves near-lossless recovery of the teacher model's performance across standard language modeling tasks, while significantly outperforming previous linearization methods. On the 5-shot MMLU benchmark, Lizard improves over prior models by 18 points and shows marked gains on associative recall tasks.
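To make the hybrid mechanism concrete, here is a minimal PyTorch sketch of how a gated linear attention recurrence (constant-size state for global context) can be combined with sliding-window softmax attention (exact local context). This is an illustrative reading of the abstract, not the authors' implementation: all class, parameter, and function names (HybridAttention, window_size, gate_proj, etc.) are assumptions, and the paper's meta-memory tokens and hardware-aware training algorithm are omitted.

```python
# Hypothetical sketch of a gated-linear + sliding-window hybrid attention layer.
# Not the paper's code; single-head, unoptimized, for intuition only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridAttention(nn.Module):
    def __init__(self, d_model: int, window_size: int = 64):
        super().__init__()
        self.d = d_model
        self.window = window_size
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.gate_proj = nn.Linear(d_model, d_model)   # data-dependent decay gate
        self.out_proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, T, D = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        g = torch.sigmoid(self.gate_proj(x))           # gates in (0, 1)

        # --- Gated linear attention: fixed-size recurrent state ---
        # S_t = diag(g_t) S_{t-1} + k_t v_t^T ;  o_t = S_t^T q_t
        S = x.new_zeros(B, D, D)
        linear_out = []
        for t in range(T):
            S = g[:, t].unsqueeze(-1) * S \
                + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(1)
            linear_out.append(torch.einsum("bdk,bd->bk", S, q[:, t]))
        linear_out = torch.stack(linear_out, dim=1)    # (B, T, D)

        # --- Sliding-window softmax attention over the last `window` tokens ---
        scores = torch.einsum("btd,bsd->bts", q, k) / D ** 0.5
        idx = torch.arange(T, device=x.device)
        causal = idx[None, :] <= idx[:, None]
        in_window = idx[:, None] - idx[None, :] < self.window
        scores = scores.masked_fill(~(causal & in_window), float("-inf"))
        local_out = torch.einsum("bts,bsd->btd", F.softmax(scores, dim=-1), v)

        # Mix compressed global context with exact local context.
        return self.out_proj(torch.cat([linear_out, local_out], dim=-1))


if __name__ == "__main__":
    layer = HybridAttention(d_model=32, window_size=8)
    y = layer(torch.randn(2, 16, 32))
    print(y.shape)  # torch.Size([2, 16, 32])
```

Because the recurrent state S has a fixed size regardless of sequence length, inference memory stays constant; in practice the per-token Python loop would be replaced by a chunked, hardware-aware scan, which is presumably what the paper's training-speed algorithm addresses.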
Similar Papers
Liger: Linearizing Large Language Models to Gated Recurrent Structures
Computation and Language
Makes big computer brains run much faster.
Gecko: An Efficient Neural Architecture Inherently Processing Sequences with Arbitrary Lengths
Machine Learning (CS)
Lets computers remember much longer stories.
Zebra-Llama: Towards Extremely Efficient Hybrid Models
Machine Learning (CS)
Makes smart computer programs run faster and cheaper.