Score: 1

XLM: A Python package for non-autoregressive language models

Published: December 18, 2025 | arXiv ID: 2512.17065v1

By: Dhruvesh Patel, Durga Prasad Maram, Sai Sreenivas Chintha, and more

Potential Business Impact:

Speeds up computer-generated text by predicting many words in parallel rather than one at a time.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In recent years, there has been a resurgence of interest in non-autoregressive text generation in the context of general language modeling. Unlike the well-established autoregressive language modeling paradigm, which has a plethora of standard training and inference libraries, implementations of non-autoregressive language modeling have largely been bespoke, making it difficult to perform systematic comparisons of different methods. Moreover, each non-autoregressive language model typically requires its own data collation, loss, and prediction logic, making it challenging to reuse common components. In this work, we present the XLM Python package, which is designed to make implementing small non-autoregressive language models faster, with the secondary goal of providing a suite of small pre-trained models (through a companion xlm-models package) that can be used by the research community. The code is available at https://github.com/dhruvdcoder/xlm-core.
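To make the paradigm concrete: where an autoregressive model emits one token per forward pass left to right, a non-autoregressive model predicts all positions in parallel and refines them over a few iterations. The sketch below is a minimal, self-contained illustration of one such scheme (mask-predict style iterative decoding) in plain PyTorch; it is not the xlm-core API, and every name in it (ToyDenoiser, iterative_decode, mask_id, num_steps) is hypothetical.

import torch

# Hypothetical toy denoiser standing in for a bidirectional transformer;
# illustrative only, not part of the xlm-core package.
class ToyDenoiser(torch.nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, hidden)
        self.head = torch.nn.Linear(hidden, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Per-position logits over the vocabulary (no causal mask).
        return self.head(self.embed(tokens))

@torch.no_grad()
def iterative_decode(model, seq_len: int, mask_id: int, num_steps: int = 4):
    # Start from an all-mask sequence and unmask the most confident
    # positions a chunk at a time (a linear unmasking schedule).
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for step in range(num_steps):
        still_masked = tokens == mask_id
        if not still_masked.any():
            break
        logits = model(tokens)
        conf, pred = torch.softmax(logits, dim=-1).max(dim=-1)
        # Only still-masked positions compete for unmasking this round.
        conf = conf.masked_fill(~still_masked, -1.0)
        k = max(1, round(still_masked.sum().item() / (num_steps - step)))
        chosen = conf.topk(k, dim=-1).indices
        tokens[0, chosen[0]] = pred[0, chosen[0]]
    return tokens

vocab_size, mask_id = 100, 0
model = ToyDenoiser(vocab_size)
print(iterative_decode(model, seq_len=8, mask_id=mask_id))

The model-specific pieces in this sketch (how inputs are masked, what loss would train the denoiser, and the prediction loop itself) correspond to the data collation, loss, and prediction logic that, per the abstract, each non-autoregressive model normally reimplements and that a shared library aims to factor out.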

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/dhruvdcoder/xlm-core

Page Count
12 pages

Category
Computer Science:
Computation and Language