Understanding Pure Textual Reasoning for Blind Image Quality Assessment
By: Yuan Li, Shin'ya Nishida
Potential Business Impact:
Helps computers judge picture quality using words.
Textual reasoning has recently been widely adopted in Blind Image Quality Assessment (BIQA). However, it remains unclear how textual information contributes to quality prediction and to what extent text can represent the score-related image content. This work addresses these questions from an information-flow perspective by comparing existing BIQA models with three paradigms designed to learn the image-text-score relationship: Chain-of-Thought, Self-Consistency, and Autoencoder. Our experiments show that the score prediction performance of existing models drops significantly when only textual information is used for prediction. Whereas the Chain-of-Thought paradigm yields little improvement in BIQA performance, the Self-Consistency paradigm significantly reduces the gap between image- and text-conditioned predictions, narrowing the PLCC/SRCC difference to 0.02/0.03. The Autoencoder-like paradigm is less effective at closing the image-text gap, yet it points to a direction for further optimization. These findings provide insight into how to improve textual reasoning for BIQA and other high-level vision tasks.
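For context, PLCC (Pearson linear correlation coefficient) and SRCC (Spearman rank-order correlation coefficient) are the standard IQA metrics for agreement between predicted scores and ground-truth mean opinion scores (MOS). The following is a minimal sketch, not the authors' code, of how the image-text gap quoted above can be measured; the data arrays and variable names are hypothetical placeholders.

    # Minimal sketch (not from the paper): PLCC/SRCC and the image-text gap.
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def plcc_srcc(pred, mos):
        """Pearson linear and Spearman rank-order correlations vs. MOS."""
        plcc, _ = pearsonr(pred, mos)
        srcc, _ = spearmanr(pred, mos)
        return plcc, srcc

    # Hypothetical data: ground-truth MOS and predictions conditioned on
    # the full image vs. on textual descriptions alone.
    rng = np.random.default_rng(0)
    mos = rng.uniform(1.0, 5.0, size=200)
    pred_image = mos + rng.normal(0.0, 0.3, size=200)  # image-conditioned
    pred_text = mos + rng.normal(0.0, 0.4, size=200)   # text-conditioned

    plcc_img, srcc_img = plcc_srcc(pred_image, mos)
    plcc_txt, srcc_txt = plcc_srcc(pred_text, mos)

    # The abstract reports this gap narrowing to 0.02 (PLCC) / 0.03 (SRCC)
    # under the Self-Consistency paradigm.
    print(f"PLCC gap: {plcc_img - plcc_txt:.3f}")
    print(f"SRCC gap: {srcc_img - srcc_txt:.3f}")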
Similar Papers
Guiding Perception-Reasoning Closer to Human in Blind Image Quality Assessment
CV and Pattern Recognition
Teaches computers to judge picture quality like people.
Building Reasonable Inference for Vision-Language Models in Blind Image Quality Assessment
CV and Pattern Recognition
Makes AI judge picture quality more like people.
Reasoning as Representation: Rethinking Visual Reinforcement Learning in Image Quality Assessment
CV and Pattern Recognition
Makes picture quality checks faster and smarter.