Uncertainty in Semantic Language Modeling with PIXELS
By: Stefania Radu, Marco Zullich, Matias Valdenegro-Toro
Potential Business Impact:
Helps computers understand text better, even with errors.
Pixel-based language models aim to solve the vocabulary bottleneck problem in language modeling, but the challenge of uncertainty quantification remains open. The novelty of this work consists of analysing uncertainty and confidence in pixel-based language models across 18 languages and 7 scripts, evaluated on 3 semantically challenging tasks. This is achieved through several methods, such as Monte Carlo Dropout, Transformer Attention, and Ensemble Learning. The results suggest that pixel-based models underestimate uncertainty when reconstructing patches. Uncertainty is also influenced by the script, with languages written in the Latin script displaying lower uncertainty. The findings on ensemble learning show better performance on the named entity recognition and question-answering tasks across 16 languages when hyperparameter tuning is applied.
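Of the methods named in the abstract, Monte Carlo Dropout is the simplest to illustrate: dropout is kept active at inference time, and the spread of predictions over several stochastic forward passes serves as an uncertainty estimate. The sketch below is a minimal NumPy illustration on a hypothetical single linear layer, not the paper's actual model or code; the function and parameter names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, p=0.1, n_samples=50):
    """Monte Carlo Dropout on a toy linear model (illustrative only).

    Dropout stays active at inference; the mean over stochastic
    forward passes is the prediction, and the standard deviation
    across passes is a simple uncertainty estimate.
    """
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > p      # random dropout mask
        dropped = weights * mask / (1.0 - p)      # inverted-dropout scaling
        preds.append(x @ dropped)                 # stochastic forward pass
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(4,))          # toy input
w = rng.normal(size=(4, 3))        # toy weight matrix
mean, std = mc_dropout_predict(x, w)
```

With `p=0`, every pass is identical and `std` collapses to zero; larger `p` or fewer samples widens the spread, which is the intuition behind using dropout variance as a confidence signal.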
Similar Papers
Overcoming Vocabulary Constraints with Pixel-level Fallback
Computation and Language
Helps computers understand any language, even new ones.
Are vision language models robust to uncertain inputs?
CV and Pattern Recognition
Makes AI admit when it doesn't know.
Know What You do Not Know: Verbalized Uncertainty Estimation Robustness on Corrupted Images in Vision-Language Models
CV and Pattern Recognition
Helps AI know when it's wrong.