Flexible Language Modeling in Continuous Space with Transformer-based Autoregressive Flows
By: Ruixiang Zhang, Shuangfei Zhai, Jiatao Gu, and more
Potential Business Impact:
Lets computers understand words better, faster, and more flexibly.
Autoregressive models have driven remarkable progress in language modeling. Their foundational reliance on discrete tokens, unidirectional context, and single-pass decoding, while central to their success, also motivates exploring a design space that could offer new axes of modeling flexibility. In this work, we explore an alternative paradigm, shifting language modeling from a discrete token space to a continuous latent space. We propose TarFlowLM, a novel framework that employs transformer-based autoregressive normalizing flows to model these continuous representations. This approach unlocks substantial flexibility, enabling the construction of models that can capture global bi-directional context through stacked, alternating-direction autoregressive transformations, support block-wise generation with flexible token patch sizes, and facilitate a hierarchical multi-pass generation process. We further propose new mixture-based coupling transformations designed to capture complex dependencies within the latent space shaped by discrete data, and demonstrate theoretical connections to conventional discrete autoregressive models. Extensive experiments on language modeling benchmarks demonstrate strong likelihood performance and highlight the flexible modeling capabilities inherent in our framework.
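The abstract describes the architecture only at a high level. The PyTorch sketch below is one possible reading of the "stacked, alternating-direction autoregressive transformations" idea, not the authors' TarFlowLM implementation: class names (`ARFlowLayer`, `StackedAlternatingFlow`), dimensions, and layer counts are illustrative assumptions. Each layer applies an affine autoregressive transformation whose shift and log-scale at position t are predicted from earlier positions by a causal transformer, and alternating layers flip the sequence so the composed flow sees context from both directions.

```python
# Illustrative sketch only; assumed names and hyperparameters, not TarFlowLM itself.
import torch
import torch.nn as nn


class ARFlowLayer(nn.Module):
    """One autoregressive affine step x_t = (z_t - mu_t) * exp(-s_t),
    where (mu_t, s_t) are predicted from z_{<t} by a causal transformer."""

    def __init__(self, dim: int, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(block, num_layers=n_layers)
        self.to_params = nn.Linear(dim, 2 * dim)   # predicts (mu, log-scale)
        self.start = nn.Parameter(torch.zeros(1, 1, dim))  # context for t = 0

    def forward(self, z: torch.Tensor):
        """z: (batch, seq_len, dim). Returns the transformed sequence and log|det J|."""
        B, T, D = z.shape
        # Shift right so that position t only conditions on z_{<t}.
        ctx = torch.cat([self.start.expand(B, 1, D), z[:, :-1]], dim=1)
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=z.device), diagonal=1
        )
        h = self.transformer(ctx, mask=causal_mask)
        mu, log_s = self.to_params(h).chunk(2, dim=-1)
        x = (z - mu) * torch.exp(-log_s)
        log_det = -log_s.sum(dim=(1, 2))
        return x, log_det


class StackedAlternatingFlow(nn.Module):
    """Stack of AR flow layers; every other layer flips the sequence so the
    composition can capture context from both directions."""

    def __init__(self, dim: int, n_flows: int = 4):
        super().__init__()
        self.layers = nn.ModuleList([ARFlowLayer(dim) for _ in range(n_flows)])

    def forward(self, z: torch.Tensor):
        total_log_det = torch.zeros(z.size(0), device=z.device)
        for i, layer in enumerate(self.layers):
            if i % 2 == 1:                 # alternate the autoregressive direction
                z = z.flip(dims=(1,))
            z, log_det = layer(z)
            if i % 2 == 1:                 # flip back to keep positions aligned
                z = z.flip(dims=(1,))
            total_log_det = total_log_det + log_det
        return z, total_log_det


if __name__ == "__main__":
    flow = StackedAlternatingFlow(dim=16, n_flows=4)
    latents = torch.randn(2, 8, 16)        # a batch of continuous token embeddings
    x, log_det = flow(latents)
    print(x.shape, log_det.shape)          # torch.Size([2, 8, 16]) torch.Size([2])
```

Under this reading, the log-likelihood of a continuous latent sequence would be the base-density log-probability of the flow output plus the accumulated log-determinant. The mixture-based coupling transformations and block-wise (patch) generation mentioned in the abstract are not shown in this sketch.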
Similar Papers
Language Models Are Implicitly Continuous
Computation and Language
Computers see sentences as smooth, flowing ideas.
Fast Autoregressive Models for Continuous Latent Generation
Computer Vision and Pattern Recognition
Makes computers draw realistic pictures much faster.
Token Maturation: Autoregressive Language Generation via Continuous Token Dynamics
Computation and Language
Makes AI write better by thinking longer first.