Score: 1

Flexible Language Modeling in Continuous Space with Transformer-based Autoregressive Flows

Published: July 1, 2025 | arXiv ID: 2507.00425v1

By: Ruixiang Zhang, Shuangfei Zhai, Jiatao Gu, and more

BigTech Affiliations: Apple

Potential Business Impact:

Models language in a continuous latent space rather than over discrete tokens, enabling bidirectional context, block-wise generation with flexible patch sizes, and multi-pass refinement while maintaining strong likelihood performance.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Autoregressive models have driven remarkable progress in language modeling. Their foundational reliance on discrete tokens, unidirectional context, and single-pass decoding, while central to their success, also inspires the exploration of a design space that could offer new axes of modeling flexibility. In this work, we explore an alternative paradigm, shifting language modeling from a discrete token space to a continuous latent space. We propose a novel framework, TarFlowLM, which employs transformer-based autoregressive normalizing flows to model these continuous representations. This approach unlocks substantial flexibility, enabling the construction of models that can capture global bi-directional context through stacked, alternating-direction autoregressive transformations, support block-wise generation with flexible token patch sizes, and facilitate a hierarchical multi-pass generation process. We further propose new mixture-based coupling transformations designed to capture complex dependencies within the latent space shaped by discrete data, and demonstrate theoretical connections to conventional discrete autoregressive models. Extensive experiments on language modeling benchmarks demonstrate strong likelihood performance and highlight the flexible modeling capabilities inherent in our framework.
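To make the "stacked, alternating-direction autoregressive transformations" concrete, the following is a minimal PyTorch sketch of that general idea: each layer applies an affine autoregressive flow to continuous token latents, with the shift and scale at each position predicted by a causal transformer from earlier positions, and successive layers flip the sequence direction. This is not the authors' implementation; the paper's mixture-based couplings, training procedure, and hyperparameters are not reproduced, and all module names and sizes below are illustrative assumptions.

# Illustrative sketch only (not the TarFlowLM code): an affine autoregressive
# flow over continuous token embeddings, with a causal Transformer predicting
# per-position shift/scale and stacked layers alternating direction.
import torch
import torch.nn as nn


class CausalFlowLayer(nn.Module):
    """One autoregressive affine transform z_t -> (z_t - mu_t) * exp(-s_t),
    where (mu_t, s_t) depend only on z_{<t} via a causal Transformer."""

    def __init__(self, dim: int = 64, heads: int = 4, reverse: bool = False):
        super().__init__()
        self.reverse = reverse  # alternate direction across stacked layers
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_shift_scale = nn.Linear(dim, 2 * dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, seq_len, dim) continuous latents
        if self.reverse:
            z = torch.flip(z, dims=[1])
        B, T, D = z.shape
        # Shift inputs right so position t only conditions on z_{<t}.
        ctx = torch.cat([torch.zeros(B, 1, D, device=z.device), z[:, :-1]], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(z.device)
        h = self.encoder(ctx, mask=mask)
        mu, log_s = self.to_shift_scale(h).chunk(2, dim=-1)
        out = (z - mu) * torch.exp(-log_s)
        log_det = -log_s.sum(dim=(1, 2))  # log|det Jacobian| of this layer
        if self.reverse:
            out = torch.flip(out, dims=[1])
        return out, log_det


if __name__ == "__main__":
    # Stacking layers with alternating directions exposes bidirectional context.
    flow = nn.ModuleList([CausalFlowLayer(reverse=(i % 2 == 1)) for i in range(4)])
    z = torch.randn(2, 16, 64)  # e.g. continuous embeddings of 16 tokens
    total_log_det = torch.zeros(2)
    for layer in flow:
        z, ld = layer(z)
        total_log_det += ld
    print(z.shape, total_log_det.shape)

Under a change-of-variables objective, the per-layer log-determinants accumulated above would enter the likelihood of the continuous latents; the mixture-based couplings proposed in the paper replace the simple affine transform shown here.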

Country of Origin
🇺🇸 United States

Page Count
43 pages

Category
Computer Science:
Machine Learning (CS)