AU-IQA: A Benchmark Dataset for Perceptual Quality Assessment of AI-Enhanced User-Generated Content
By: Shushi Wang, Chunyi Li, Zicheng Zhang, and more
Potential Business Impact:
Helps computers judge if AI-improved pictures look good.
AI-based image enhancement techniques have been widely adopted in various visual applications, significantly improving the perceptual quality of user-generated content (UGC). However, the lack of specialized quality assessment models remains a major bottleneck in this field, degrading user experience and hindering the advancement of enhancement methods. While perceptual quality assessment methods have shown strong performance on UGC and AIGC individually, their effectiveness on AI-enhanced UGC (AI-UGC), which blends characteristics of both, remains largely unexplored. To address this gap, we construct AU-IQA, a benchmark dataset comprising 4,800 AI-UGC images produced by three representative enhancement types: super-resolution, low-light enhancement, and denoising. On this dataset, we evaluate a range of existing quality assessment models, including traditional IQA methods and large multimodal models. Finally, we provide a comprehensive analysis of how well current approaches assess the perceptual quality of AI-UGC. The AU-IQA dataset is available at https://github.com/WNNGGU/AU-IQA-Dataset.
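As a rough illustration of the kind of evaluation the abstract describes, the sketch below scores a quality model against human opinion scores using the standard SRCC/PLCC correlations. It is a minimal sketch only: the annotation file layout, the column names ("image_path", "mos"), and the predict_quality() helper are assumptions for illustration, not the released AU-IQA format or any specific model from the paper.

```python
# Hypothetical evaluation loop for an IQA model on a benchmark like AU-IQA.
# Assumes a CSV with "image_path" and "mos" columns; adapt to the real dataset layout.
import csv
from scipy.stats import pearsonr, spearmanr


def predict_quality(image_path: str) -> float:
    """Placeholder for any IQA model's scoring function (traditional IQA or an LMM)."""
    raise NotImplementedError("plug in the quality model to be evaluated")


def evaluate(annotation_csv: str) -> tuple[float, float]:
    preds, mos = [], []
    with open(annotation_csv, newline="") as f:
        for row in csv.DictReader(f):
            preds.append(predict_quality(row["image_path"]))
            mos.append(float(row["mos"]))
    srcc, _ = spearmanr(preds, mos)  # monotonic agreement with human ratings
    plcc, _ = pearsonr(preds, mos)   # linear agreement with human ratings
    return srcc, plcc
```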
Similar Papers
ViDA-UGC: Detailed Image Quality Analysis via Visual Distortion Assessment for UGC Images
CV and Pattern Recognition
Makes AI better at judging photo quality.
Towards Explainable Partial-AIGC Image Quality Assessment
CV and Pattern Recognition
Helps check if AI-edited pictures look real.