Negation-Aware Test-Time Adaptation for Vision-Language Models
By: Haochen Han, Alex Jinpeng Wang, Fangming Liu, and others
Potential Business Impact:
Helps computers understand what is NOT in pictures.
In this paper, we study a practical but less-touched problem in Vision-Language Models (VLMs), i.e., negation understanding. Specifically, many real-world applications require models to explicitly identify what is false or non-existent, e.g., radiologists may search for images that exclude specific conditions. Despite the impressive transferability of VLMs through large-scale training, they suffer from a critical limitation: they fail to handle negation. To address this challenge, existing methods attribute the root cause to the scarcity of negation training data and propose to fine-tune VLMs on massive data containing explicit negation. Undoubtedly, such data-centric solutions demand substantial data and computational resources, limiting their sustainable widespread adoption. To tackle negation in a low-carbon manner, we empirically observe that the key obstacle lies in the dual-concept shifts between the affirmation and negation distributions. Therefore, we propose a Negation-Aware Test-Time Adaptation (NEAT) method to efficiently adjust distribution-related parameters during inference. In brief, NEAT can reduce distribution shift in consistent semantics while eliminating false distributional consistency in unrelated semantics. Extensive experiments on various negation understanding tasks verify the effectiveness of the proposed method. Remarkably, with less than 0.01% of trainable parameters, NEAT achieves comparable or superior performance to state-of-the-art post-training approaches. Our code is available at https://github.com/hhc1997/NEAT.
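The core idea of test-time adaptation mentioned in the abstract, updating only a tiny set of parameters on unlabeled test inputs during inference, can be illustrated with a minimal generic sketch. This is not the NEAT objective from the paper: it adapts a single hypothetical logit-scale parameter by entropy minimization (a common TTA criterion), using a finite-difference gradient so the example stays self-contained.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_entropy(p):
    # Average prediction entropy over a batch of test inputs.
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

def adapt_scale(logits, steps=50, lr=0.1, eps=1e-4):
    """Generic test-time adaptation sketch (NOT NEAT): tune one
    logit-scale parameter to minimize entropy on unlabeled data."""
    s = 1.0
    for _ in range(steps):
        # Finite-difference gradient of mean entropy w.r.t. the scale.
        g = (mean_entropy(softmax(logits * (s + eps)))
             - mean_entropy(softmax(logits * (s - eps)))) / (2 * eps)
        s -= lr * g
    return s

# Toy image-text similarity logits for two test inputs (illustrative).
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 1.5, 0.2]])
s = adapt_scale(logits)
adapted = softmax(logits * s)
```

Here entropy minimization sharpens the model's predictions on the test distribution while leaving the backbone untouched; the paper's method instead targets the affirmation/negation distribution shift, but the parameter-efficiency principle (updating far less than 0.01% of the weights) is the same.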
Similar Papers
Vision-Language Models Do Not Understand Negation
CV and Pattern Recognition
Teaches computers to understand "not" in pictures.
NegVQA: Can Vision Language Models Understand Negation?
Computation and Language
Helps computers understand "not" in pictures.