Autoregressive Styled Text Image Generation, but Make it Reliable
By: Carmine Zaccagnino, Fabio Quattrini, Vittorio Pippi, and more
Potential Business Impact:
Makes computers write text that looks like real handwriting.
Generating faithful and readable styled text images (especially for Styled Handwritten Text Generation, HTG) is an open problem with several possible applications across graphic design, document understanding, and image editing. Much research effort in this task is dedicated to developing strategies that reproduce the stylistic characteristics of a given writer, with promising results in terms of style fidelity and generalization achieved by the recently proposed Autoregressive Transformer paradigm for HTG. However, this method requires additional inputs, lacks a proper stop mechanism, and can fall into repetition loops that generate visual artifacts. In this work, we rethink the autoregressive formulation by framing HTG as a multimodal prompt-conditioned generation task, and tackle content controllability issues by introducing special textual input tokens for better alignment with the visual ones. Moreover, we devise a Classifier-Free-Guidance-based strategy for our autoregressive model. Through extensive experimental validation, we demonstrate that our approach, dubbed Eruku, requires fewer inputs than previous solutions, generalizes better to unseen styles, and follows the textual prompt more faithfully, improving content adherence.
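To make the Classifier-Free-Guidance idea concrete, here is a minimal sketch of how CFG is commonly applied at each autoregressive decoding step: the model is run once with the conditioning prompt and once with it dropped, and the two sets of next-token logits are combined. The function name, toy logits, and guidance scale below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, guidance_scale):
    # Classifier-free guidance at decoding time: extrapolate from the
    # unconditional logits toward the conditional ones. A scale of 1.0
    # recovers the plain conditional model; scale > 1.0 strengthens
    # adherence to the prompt (here, the style/text conditioning).
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)

# Toy example with a 4-token vocabulary (illustrative values only).
cond = np.array([2.0, 0.5, 0.1, -1.0])    # logits given the prompt
uncond = np.array([1.0, 1.0, 0.0, 0.0])   # logits with the prompt dropped
guided = cfg_logits(cond, uncond, guidance_scale=2.0)
```

In practice the guided logits would then be fed to the usual sampling step (e.g. softmax plus top-k sampling); the guidance scale trades off prompt adherence against diversity.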
Similar Papers
Zero-Shot Styled Text Image Generation, but Make It Autoregressive
CV and Pattern Recognition
Makes computers write in any new handwriting.
Quo Vadis Handwritten Text Generation for Handwritten Text Recognition?
CV and Pattern Recognition
Makes old handwriting easier for computers to read.
Personalized Text-to-Image Generation with Auto-Regressive Models
CV and Pattern Recognition
Makes AI draw pictures of *your* stuff.