Score: 1

Autoregressive Styled Text Image Generation, but Make it Reliable

Published: October 27, 2025 | arXiv ID: 2510.23240v1

By: Carmine Zaccagnino, Fabio Quattrini, Vittorio Pippi, and more

Potential Business Impact:

Makes computers write text that looks like real handwriting.

Business Areas:
Text Analytics, Data and Analytics, Software

Generating faithful and readable styled text images (especially for Styled Handwritten Text Generation, HTG) is an open problem with several possible applications across graphic design, document understanding, and image editing. Much research effort in this task is dedicated to developing strategies that reproduce the stylistic characteristics of a given writer, with promising results in terms of style fidelity and generalization achieved by the recently proposed Autoregressive Transformer paradigm for HTG. However, this method requires additional inputs, lacks a proper stop mechanism, and can fall into repetition loops, generating visual artifacts. In this work, we rethink the autoregressive formulation by framing HTG as a multimodal prompt-conditioned generation task, and tackle the content controllability issues by introducing special textual input tokens for better alignment with the visual ones. Moreover, we devise a Classifier-Free-Guidance-based strategy for our autoregressive model. Through extensive experimental validation, we demonstrate that our approach, dubbed Eruku, requires fewer inputs than previous solutions, generalizes better to unseen styles, and follows the textual prompt more faithfully, improving content adherence.
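The paper does not publish its decoding code here, but the core idea of a Classifier-Free-Guidance strategy for an autoregressive model can be sketched as follows: at each decoding step, the model is run twice, once with the conditioning prompt and once with it dropped, and the two sets of next-token logits are combined before sampling. The function names, the toy three-token vocabulary, and the guidance scale below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, guidance_scale=1.5):
    """Combine conditional and unconditional next-token logits with
    classifier-free guidance: l = l_uncond + w * (l_cond - l_uncond).
    w > 1 pushes generation toward the conditioning prompt."""
    cond = np.asarray(cond_logits, dtype=float)
    uncond = np.asarray(uncond_logits, dtype=float)
    return uncond + guidance_scale * (cond - uncond)

def softmax(x):
    # Numerically stable softmax over a logit vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy example with a three-token vocabulary (values are made up):
cond = [2.0, 0.5, 0.1]    # logits computed with the text/style prompt
uncond = [1.0, 1.0, 1.0]  # logits computed with the prompt dropped
guided = cfg_logits(cond, uncond, guidance_scale=2.0)
probs = softmax(guided)   # distribution to sample the next token from
```

In practice the two forward passes are usually batched together, and the unconditional pass is enabled at inference time by randomly dropping the prompt during training, so the model learns both conditional and unconditional distributions.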

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition