Guiding Perception-Reasoning Closer to Human in Blind Image Quality Assessment
By: Yuan Li, Yahan Yu, Youyuan Lin and more
Humans assess image quality through a perception-reasoning cascade, integrating sensory cues with implicit reasoning to form self-consistent judgments. In this work, we investigate how a model can acquire both human-like and self-consistent reasoning capabilities for blind image quality assessment (BIQA). We first collect human evaluation data that capture several aspects of the human perception-reasoning pipeline. We then apply reinforcement learning, using the human annotations as reward signals to guide the model toward human-like perception and reasoning. To help the model internalize self-consistent reasoning, we design a reward that drives it to infer image quality purely from its self-generated descriptions. Empirically, our approach achieves score prediction performance comparable to state-of-the-art BIQA systems under standard metrics, including the Pearson and Spearman correlation coefficients. Beyond the rating score, we assess human-model alignment using ROUGE-1 to measure the similarity between model-generated and human perception-reasoning chains. On over 1,000 human-annotated samples, our model reaches a ROUGE-1 score of 0.512 (vs. 0.443 for the baseline), indicating substantial coverage of human explanations and marking a step toward human-like, interpretable reasoning in BIQA.
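The following is a minimal Python sketch of how the two reward signals and the reported evaluation metrics could be computed. The helper names (alignment_reward, consistency_reward, evaluate), the Gaussian shaping of the consistency term, the sigma value, and the equal reward weighting are illustrative assumptions, not the paper's actual implementation; only the metric choices (Pearson/Spearman correlation, ROUGE-1) are taken from the abstract.

import numpy as np
from scipy.stats import pearsonr, spearmanr
from rouge_score import rouge_scorer

_rouge = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def alignment_reward(model_chain: str, human_chain: str) -> float:
    # ROUGE-1 F1 between the model's perception-reasoning chain and a
    # human-written annotation; higher means closer to the human explanation.
    return _rouge.score(human_chain, model_chain)["rouge1"].fmeasure

def consistency_reward(direct_score: float, text_only_score: float,
                       sigma: float = 0.5) -> float:
    # Self-consistency: the score inferred purely from the model's own
    # description should agree with the score assigned with image access.
    # Gaussian-shaped reward in (0, 1], peaking when the two scores coincide.
    return float(np.exp(-((direct_score - text_only_score) ** 2) / (2 * sigma ** 2)))

def evaluate(pred_scores, mos_scores):
    # Standard BIQA metrics: Pearson (PLCC) and Spearman (SRCC) correlations
    # between predicted scores and ground-truth mean opinion scores.
    plcc, _ = pearsonr(pred_scores, mos_scores)
    srcc, _ = spearmanr(pred_scores, mos_scores)
    return {"PLCC": plcc, "SRCC": srcc}

if __name__ == "__main__":
    # Illustrative rollout: equal weighting of the two rewards is an assumption.
    reward = 0.5 * alignment_reward(
        "slight blur on the subject, colors look natural",
        "the subject is slightly blurred but the colors appear natural",
    ) + 0.5 * consistency_reward(direct_score=3.8, text_only_score=4.1)
    print(f"combined reward: {reward:.3f}")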
Similar Papers
Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment
CV and Pattern Recognition
Makes AI judge picture quality more like people.
Reasoning as Representation: Rethinking Visual Reinforcement Learning in Image Quality Assessment
CV and Pattern Recognition
Makes picture quality checks faster and smarter.
Guiding the Inner Eye: A Framework for Hierarchical and Flexible Visual Grounded Reasoning
CV and Pattern Recognition
Helps AI "see" and "think" about pictures better.