Enhancing Robustness of Autoregressive Language Models against Orthographic Attacks via Pixel-based Approach

Published: August 28, 2025 | arXiv ID: 2508.21206v1

By: Han Yang, Jian Lan, Yihong Liu and more

Potential Business Impact:

Makes computers understand messy writing, even foreign words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Autoregressive language models are vulnerable to orthographic attacks, in which input text is perturbed with characters from multilingual alphabets, leading to substantial performance degradation. This vulnerability primarily stems from the out-of-vocabulary issue inherent in subword tokenizers and their embeddings. To address this limitation, we propose a pixel-based generative language model that replaces text-based embeddings with pixel-based representations by rendering words as individual images. This design provides stronger robustness to noisy inputs while extending compatibility to multilingual text across diverse writing systems. We evaluate the proposed method on the multilingual LAMBADA dataset, the WMT24 dataset, and the SST-2 benchmark, demonstrating both its resilience to orthographic noise and its effectiveness in multilingual settings.
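To make the attack concrete, here is a minimal sketch of the kind of orthographic (homoglyph) perturbation the abstract describes: Latin characters are swapped for visually near-identical characters from other alphabets. The substitution table and function names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative homoglyph map: Latin letters -> look-alikes from other scripts.
# (Assumed mapping for demonstration; the paper's attack setup may differ.)
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u03bf",  # Greek small omicron
}

def perturb(word: str) -> str:
    """Replace each character with a visual look-alike where one exists."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in word)

clean = "alone"
attacked = perturb(clean)

# The two strings render almost identically to a human reader, yet differ
# at the codepoint level, so a subword tokenizer trained on Latin text
# treats the attacked word as out-of-vocabulary. A pixel-based model that
# renders each word as an image would see nearly the same input either way.
print(clean, attacked, clean == attacked)
```

This is why the pixel-based representation helps: the perturbation changes the byte sequence drastically but barely changes the rendered image, so an image-based embedding degrades far more gracefully than a subword lookup.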

Country of Origin
🇩🇪 Germany

Page Count
8 pages

Category
Computer Science:
Computation and Language