Enhancing Robustness of Autoregressive Language Models against Orthographic Attacks via Pixel-based Approach
By: Han Yang, Jian Lan, Yihong Liu, and more
Potential Business Impact:
Makes computers understand messy writing, even foreign words.
Autoregressive language models are vulnerable to orthographic attacks, where input text is perturbed with characters from multilingual alphabets, leading to substantial performance degradation. This vulnerability primarily stems from the out-of-vocabulary issue inherent in subword tokenizers and their embeddings. To address this limitation, we propose a pixel-based generative language model that replaces text-based embeddings with pixel-based representations by rendering words as individual images. This design provides stronger robustness to noisy inputs while extending compatibility to multilingual text across diverse writing systems. We evaluate the proposed method on the multilingual LAMBADA dataset, the WMT24 dataset, and the SST-2 benchmark, demonstrating both its resilience to orthographic noise and its effectiveness in multilingual settings.
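To see why pixel representations sidestep the out-of-vocabulary problem, consider a toy sketch (not the paper's implementation): each character is mapped to a deterministic bit grid standing in for a rendered glyph, and a word's representation is the concatenation of those grids. The `glyph` and `render_word` helpers below are hypothetical names; a hash-derived grid replaces real font rendering only to keep the example dependency-free.

```python
import hashlib

def glyph(ch, h=8, w=6):
    """Deterministic pseudo-glyph: an h*w bit grid derived from the
    character. A stand-in for real font rendering, which the paper's
    approach would use to draw each word as an actual image."""
    digest = hashlib.sha256(ch.encode("utf-8")).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(h * w)]

def render_word(word):
    """Concatenate per-character glyphs into one pixel sequence."""
    pixels = []
    for ch in word:
        pixels.extend(glyph(ch))
    return pixels

clean = render_word("robust")
noisy = render_word("rÖbust")  # orthographic perturbation of one character

# Both inputs yield valid, same-length pixel representations; only the
# 48 pixels of the perturbed character's glyph can differ. A subword
# tokenizer would instead map "rÖbust" to unrelated fallback pieces.
diff = sum(a != b for a, b in zip(clean, noisy))
print(len(clean), diff)
```

The key property the sketch illustrates: an attack on one character perturbs the input only locally in pixel space, whereas in token space it can change the entire segmentation.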
Similar Papers
Overcoming Vocabulary Constraints with Pixel-level Fallback
Computation and Language
Helps computers understand any language, even new ones.
Autoregressive Images Watermarking through Lexical Biasing: An Approach Resistant to Regeneration Attack
Cryptography and Security
Marks AI-made pictures so they can't be faked.
Uncertainty in Semantic Language Modeling with PIXELS
Computation and Language
Helps computers understand text better, even with errors.