Know What You do Not Know: Verbalized Uncertainty Estimation Robustness on Corrupted Images in Vision-Language Models
By: Mirko Borszukovszki, Ivo Pascal de Jong, Matias Valdenegro-Toro
Potential Business Impact:
Helps AI know when it's wrong.
To leverage the full potential of Large Language Models (LLMs), it is crucial to have information about the uncertainty of their answers: the model must be able to quantify how confident it is that a given response is correct. Poor uncertainty estimates can lead to overconfident wrong answers that undermine trust in these models. Substantial research exists on language models that take text inputs and produce text outputs, but because visual capabilities have been added to these models only recently, there has been little progress on uncertainty estimation for Vision-Language Models (VLMs). We tested three state-of-the-art VLMs on corrupted image data and found that increasing corruption severity degraded the models' ability to estimate their uncertainty, and the models showed overconfidence in most of the experiments.
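As a rough illustration of the kind of evaluation described above, the sketch below shows how one might corrupt an image, ask a VLM for an answer together with a verbalized confidence, and measure overconfidence as the gap between stated confidence and accuracy. This is not the authors' exact pipeline: the Gaussian-noise severity scale, the prompt wording, the query_vlm() stub, and the parse_confidence/overconfidence helpers are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact protocol) of verbalized
# uncertainty evaluation on corrupted images.
import re
import numpy as np
from PIL import Image


def gaussian_noise(img: Image.Image, severity: int) -> Image.Image:
    """Corrupt an image with Gaussian noise; higher severity = stronger noise.
    The severity-to-sigma mapping below is an assumed example scale."""
    sigma = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    arr = np.asarray(img).astype(np.float32) / 255.0
    noisy = np.clip(arr + np.random.normal(0.0, sigma, arr.shape), 0.0, 1.0)
    return Image.fromarray((noisy * 255).astype(np.uint8))


# Example prompt asking for an answer plus a verbalized confidence (assumed wording).
PROMPT = (
    "What object is shown in this image? "
    "Answer with the object name and a confidence between 0% and 100%."
)


def query_vlm(image: Image.Image, prompt: str) -> str:
    """Hypothetical placeholder: wire this up to whichever VLM API you use."""
    raise NotImplementedError("Connect to a VLM of your choice.")


def parse_confidence(reply: str) -> float:
    """Extract a verbalized confidence such as '85%' from the model's reply."""
    match = re.search(r"(\d{1,3})\s*%", reply)
    return float(match.group(1)) / 100.0 if match else float("nan")


def overconfidence(records: list[tuple[float, bool]]) -> float:
    """Mean verbalized confidence minus accuracy; a positive value means
    the model is overconfident on this set of (confidence, correct) records."""
    confs = np.array([c for c, _ in records])
    correct = np.array([ok for _, ok in records], dtype=float)
    return float(confs.mean() - correct.mean())
```

In use, one would loop over corruption severities, collect (confidence, correct) pairs per severity, and compare the resulting overconfidence values; under the paper's finding, the gap would tend to grow as severity increases.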
Similar Papers
Are vision language models robust to uncertain inputs?
CV and Pattern Recognition
Makes AI admit when it doesn't know.
Analysing the Robustness of Vision-Language-Models to Common Corruptions
CV and Pattern Recognition
Makes AI understand pictures even when they're messy.
Evaluating Robustness of Vision-Language Models Under Noisy Conditions
CV and Pattern Recognition
Tests how well AI sees and understands pictures.