UniModel: A Visual-Only Framework for Unified Multimodal Understanding and Generation
By: Chi Zhang, Jiepeng Wang, Youming Wang, and more
Potential Business Impact:
Makes computers see and create pictures from words.
We present UniModel, a unified generative model that jointly supports visual understanding and visual generation within a single pixel-to-pixel diffusion framework. Our goal is to achieve unification along three axes: the model, the tasks, and the representations.

At the representation level, we eliminate modality discrepancies by mapping both text and images into a shared visual space: textual prompts are rendered as painted text images on a clean canvas, and all inputs and outputs are treated purely as RGB pixels. This yields a fully vision-native formulation of multimodal learning.

At the task level, a broad range of vision-language problems are cast as pixel-to-pixel transformations in this visual space. For understanding tasks, the model takes an RGB image and produces a painted text image that visually encodes the semantic prediction. For generation tasks, painted text images serve as visual conditions that guide realistic and semantically aligned image synthesis. Captioning and text-to-image generation thus become different directions of the same underlying visual translation process.

At the model level, we instantiate a single Unified Diffusion Transformer trained with rectified flow in pixel space. A shared backbone jointly learns bidirectional mappings between natural images and painted text images, with lightweight task embeddings to specify the desired direction.

Experiments on text-to-image synthesis and image-to-text understanding demonstrate strong cross-modal alignment and emergent controllability such as cycle-consistent image-caption-image loops. Our initial exploration suggests that unifying model, tasks, and representations in a single visual space is a promising paradigm for general-purpose multimodal intelligence.
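The abstract's core training idea can be illustrated concretely. Rectified flow learns a velocity field by interpolating linearly between a noise sample and a target image in pixel space; the regression target is the constant velocity between the two endpoints. The sketch below is a minimal illustration of that objective on toy flattened pixel vectors — the function name, the 4-pixel "images", and the framing are illustrative assumptions, not the paper's actual implementation:

```python
import random

def rf_training_pair(x0, x1, t):
    """Rectified-flow interpolation: x_t = (1 - t) * x0 + t * x1.

    The regression target for a velocity network evaluated at (x_t, t)
    is the constant velocity v = x1 - x0, which is independent of t.
    """
    xt = [(1.0 - t) * a + t * b for a, b in zip(x0, x1)]
    v = [b - a for a, b in zip(x0, x1)]
    return xt, v

# Toy pixel-space example: x0 is a noise sample, x1 the target pixels.
# In the paper's framing, x1 could be a natural photo (generation) or a
# painted text image (understanding); the same objective covers both
# directions, with a task embedding selecting which one is meant.
random.seed(0)
x1 = [0.1, 0.9, 0.5, 0.3]                  # flattened target pixels in [0, 1]
x0 = [random.gauss(0.0, 1.0) for _ in x1]  # flattened noise sample
t = random.random()
xt, v = rf_training_pair(x0, x1, t)        # one training pair for the network
```

At t = 0 the interpolant is pure noise and at t = 1 it is the target image, so sampling amounts to integrating the learned velocity field from noise toward data.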
Similar Papers
UniFusion: Vision-Language Model as Unified Encoder in Image Generation
CV and Pattern Recognition
Makes pictures match words better for editing.
UniVideo: Unified Understanding, Generation, and Editing for Videos
CV and Pattern Recognition
Makes videos from words, pictures, and edits them.
UniToken: Harmonizing Multimodal Understanding and Generation through Unified Visual Encoding
CV and Pattern Recognition
Lets computers understand and create pictures.