Ovis-Image Technical Report
By: Guo-Hua Wang, Liangfu Cao, Tianyu Cui, and more
Potential Business Impact:
Generates images with clear, legible text, and runs on a single GPU.
We introduce $\textbf{Ovis-Image}$, a 7B text-to-image model specifically optimized for high-quality text rendering, designed to operate efficiently under stringent computational constraints. Built upon our previous Ovis-U1 framework, Ovis-Image integrates a diffusion-based visual decoder with the stronger Ovis 2.5 multimodal backbone, leveraging a text-centric training pipeline that combines large-scale pre-training with carefully tailored post-training refinements. Despite its compact architecture, Ovis-Image achieves text rendering performance on par with significantly larger open models such as Qwen-Image and approaches closed-source systems like Seedream and GPT-4o. Crucially, the model remains deployable on a single high-end GPU with moderate memory, narrowing the gap between frontier-level text rendering and practical deployment. Our results indicate that combining a strong multimodal backbone with a carefully designed, text-focused training recipe is sufficient to achieve reliable bilingual text rendering without resorting to oversized or proprietary models.
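The composition described above, a multimodal backbone that encodes the prompt into conditioning embeddings, feeding a diffusion-based visual decoder, can be sketched minimally as follows. All class and method names here (`MultimodalBackbone`, `DiffusionDecoder`, `encode`, `generate`) are illustrative placeholders, not the actual Ovis-Image API, and the "denoising" is a toy iteration standing in for a real diffusion sampler.

```python
import numpy as np

class MultimodalBackbone:
    """Stand-in for the Ovis 2.5 backbone: maps a text prompt to an embedding."""
    def __init__(self, dim: int = 16):
        self.dim = dim

    def encode(self, prompt: str) -> np.ndarray:
        # Toy deterministic embedding: bucket each token by character sum.
        vec = np.zeros(self.dim)
        tokens = prompt.split()
        for tok in tokens:
            vec[sum(ord(c) for c in tok) % self.dim] += 1.0
        return vec / max(len(tokens), 1)

class DiffusionDecoder:
    """Stand-in for the visual decoder: iteratively refines noise toward
    an image conditioned on the backbone's embedding."""
    def __init__(self, size: int = 8, steps: int = 10, seed: int = 0):
        self.size = size
        self.steps = steps
        self.rng = np.random.default_rng(seed)

    def generate(self, cond: np.ndarray) -> np.ndarray:
        img = self.rng.standard_normal((self.size, self.size))
        # Tile the conditioning vector into image shape as a crude "target".
        target = np.resize(cond, (self.size, self.size))
        for _ in range(self.steps):
            img = img + 0.3 * (target - img)  # toy denoising step
        return img

# Usage: the two components form the text-to-image pipeline.
backbone = MultimodalBackbone()
decoder = DiffusionDecoder()
image = decoder.generate(backbone.encode("a sign that says OPEN"))
```

The point of the sketch is the interface boundary: the backbone owns language understanding, the decoder owns pixel synthesis, and only a conditioning embedding crosses between them, which is why swapping in a stronger backbone (Ovis 2.5 over Ovis-U1) can improve text rendering without redesigning the decoder.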
Similar Papers
Ovis2.5 Technical Report
CV and Pattern Recognition
Helps computers understand complex pictures and charts.
Training-Free Diffusion Priors for Text-to-Image Generation via Optimization-based Visual Inversion
CV and Pattern Recognition
Makes AI create better pictures from words.
Qwen-Image Technical Report
CV and Pattern Recognition
Creates pictures with perfect words inside them.