Unified Diffusion Transformer for High-fidelity Text-Aware Image Restoration
By: Jin Hyeon Kim, Paul Hyunbin Cho, Claire Kim and more
Potential Business Impact:
Fixes blurry text in pictures, making it readable.
Text-Aware Image Restoration (TAIR) aims to recover high-quality images from low-quality inputs containing degraded textual content. While diffusion models provide strong generative priors for general image restoration, they often produce text hallucinations in text-centric tasks due to the absence of explicit linguistic knowledge. To address this, we propose UniT, a unified text restoration framework that integrates a Diffusion Transformer (DiT), a Vision-Language Model (VLM), and a Text Spotting Module (TSM) in an iterative fashion for high-fidelity text restoration. In UniT, the VLM extracts textual content from degraded images to provide explicit textual guidance. Simultaneously, the TSM, trained on diffusion features, generates intermediate OCR predictions at each denoising step, enabling the VLM to iteratively refine its guidance during the denoising process. Finally, the DiT backbone, leveraging its strong representational power, exploits these cues to recover fine-grained textual content while effectively suppressing text hallucinations. Experiments on the SA-Text and Real-Text benchmarks demonstrate that UniT faithfully reconstructs degraded text, substantially reduces hallucinations, and achieves state-of-the-art end-to-end F1-score performance on the TAIR task.
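The abstract's iterative interplay between the three modules can be sketched as a denoising loop. The following is a minimal toy sketch of that control flow only; every class and method name here (VLM, TSM, DiT, `read_text`, `spot_text`, etc.) is an illustrative stand-in, not the paper's actual API, and the "latent" is a placeholder rather than real diffusion state.

```python
class VLM:
    """Stand-in vision-language model: extracts and refines textual guidance."""
    def read_text(self, image):
        # Initial guess of the textual content in the degraded input.
        return "degraded text guess"

    def refine(self, guidance, ocr_pred):
        # Merge previous guidance with the latest intermediate OCR prediction.
        return ocr_pred or guidance


class TSM:
    """Stand-in text spotting module operating on diffusion features."""
    def spot_text(self, features):
        return "OCR@" + str(features)


class DiT:
    """Stand-in diffusion transformer backbone."""
    def denoise_step(self, latent, guidance, t):
        # Returns (intermediate features, partially denoised latent).
        return f"feat_t{t}", latent - 1


def restore(lq_image, dit, vlm, tsm, num_steps=3):
    guidance = vlm.read_text(lq_image)        # explicit textual guidance
    latent = num_steps                        # toy "noisy" latent
    for t in reversed(range(num_steps)):      # iterative denoising loop
        features, latent = dit.denoise_step(latent, guidance, t)
        ocr = tsm.spot_text(features)         # intermediate OCR prediction
        guidance = vlm.refine(guidance, ocr)  # VLM refines guidance each step
    return latent, guidance


result, final_guidance = restore("lq.png", DiT(), VLM(), TSM())
```

The key design point the abstract describes is that guidance is not fixed up front: the TSM's per-step OCR output feeds back into the VLM, so the textual conditioning can improve as the image becomes cleaner.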
Similar Papers
Text-Aware Image Restoration with Diffusion Models
CV and Pattern Recognition
Fixes blurry pictures so you can read the words.
Towards Unified Semantic and Controllable Image Fusion: A Diffusion Transformer Approach
CV and Pattern Recognition
Combines pictures using words to make better images.
DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation
CV and Pattern Recognition
Makes computers create amazing pictures from words.