Score: 2

UARE: A Unified Vision-Language Model for Image Quality Assessment, Restoration, and Enhancement

Published: December 7, 2025 | arXiv ID: 2512.06750v1

By: Weiqi Li, Xuanyu Zhang, Bin Chen, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Uses image quality assessment to guide the restoration and enhancement of degraded photos (e.g., blurry or noisy images).

Business Areas:
Image Recognition, Data and Analytics, Software

Image quality assessment (IQA) and image restoration are fundamental problems in low-level vision. Although IQA and restoration are closely connected conceptually, most existing work treats them in isolation. Recent advances in unified multimodal understanding-generation models demonstrate promising results and indicate that stronger understanding can improve generative performance. This motivates a single model that unifies IQA and restoration and explicitly studies how IQA can guide restoration, a setting that remains largely underexplored yet highly valuable. In this paper, we propose UARE, to our knowledge the first Unified vision-language model for image quality Assessment, Restoration, and Enhancement. Built on pretrained unified understanding and generation models, we introduce a two-stage training framework. First, a progressive, easy-to-hard schedule expands from single-type distortions to higher-order mixed degradations, enabling UARE to handle multiple degradations. Second, we perform unified fine-tuning of quality understanding and restoration with interleaved text-image data, aligning IQA signals with restoration objectives. Through multi-task co-training, UARE leverages IQA to boost restoration and enhancement performance. Extensive experiments across IQA, restoration, and enhancement tasks demonstrate the effectiveness of UARE. The code and models will be available at https://github.com/lwq20020127/UARE.
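The abstract's first training stage is a progressive, easy-to-hard degradation curriculum that expands from single-type distortions to higher-order mixed degradations. Below is a minimal, hypothetical Python sketch of such a curriculum sampler, assuming a fixed set of single degradation types and a maximum chain length that grows with training progress; the degradation names, schedule, and function are illustrative assumptions, not the paper's implementation.

import random

# Hypothetical sketch of an easy-to-hard degradation curriculum: training starts
# with single-type distortions and progressively allows longer mixed degradation
# chains. Degradation names and the linear schedule are assumptions, not the
# authors' code.

SINGLE_DEGRADATIONS = ["blur", "noise", "jpeg", "low_light", "rain"]

def sample_degradation_chain(training_progress: float, max_order: int = 3) -> list:
    """Return a list of degradations to apply to one clean image.

    training_progress: fraction of training completed, in [0, 1].
    Early in training the chain has length 1 (single distortion); as training
    progresses, longer mixed chains become increasingly likely.
    """
    # Allowed chain length grows with progress, from 1 up to max_order.
    allowed_order = 1 + round(training_progress * (max_order - 1))
    order = random.randint(1, allowed_order)
    return random.sample(SINGLE_DEGRADATIONS, k=order)

if __name__ == "__main__":
    for step, total in [(0, 100), (50, 100), (99, 100)]:
        chain = sample_degradation_chain(step / total)
        print(f"step {step:3d}: apply {chain}")

Running the script prints a sampled degradation chain at an early, middle, and late training step, illustrating how higher-order mixed degradations only become available later in training.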

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/lwq20020127/UARE

Page Count
21 pages

Category
Computer Science: Computer Vision and Pattern Recognition