SALMAN: Stability Analysis of Language Models Through the Maps Between Graph-based Manifolds

Published: August 23, 2025 | arXiv ID: 2508.18306v1

By: Wuxinlin Cheng, Yupeng Cao, Jinwen Wu, and more

Potential Business Impact:

Makes language models more trustworthy by identifying which inputs are most likely to destabilize their outputs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent strides in pretrained transformer-based language models have propelled state-of-the-art performance in numerous NLP tasks. Yet, as these models grow in size and deployment, their robustness under input perturbations becomes an increasingly urgent question. Existing robustness methods often diverge between small-parameter and large-scale models (LLMs), and they typically rely on labor-intensive, sample-specific adversarial designs. In this paper, we propose a unified, local (sample-level) robustness framework (SALMAN) that evaluates model stability without modifying internal parameters or resorting to complex perturbation heuristics. Central to our approach is a novel Distance Mapping Distortion (DMD) measure, which ranks each sample's susceptibility by comparing input-to-output distance mappings in a near-linear complexity manner. By demonstrating significant gains in attack efficiency and robust training, we position our framework as a practical, model-agnostic tool for advancing the reliability of transformer-based NLP systems.
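The abstract does not spell out how Distance Mapping Distortion is computed. As a minimal sketch, assuming DMD scores each sample by comparing pairwise distances among its nearest neighbors in the input-embedding space with the corresponding distances in the output-embedding space (the function name, the mean-ratio formulation, and the neighbor count are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def dmd_scores(X_in, X_out, k=3):
    """Illustrative distance-mapping-distortion score: for each sample,
    take its k nearest input-space neighbors and average the ratio of
    output-space distance to input-space distance. Samples whose local
    neighborhood is stretched most in the output space rank highest."""
    n = len(X_in)
    # Pairwise Euclidean distance matrices in input and output space.
    d_in = np.linalg.norm(X_in[:, None, :] - X_in[None, :, :], axis=-1)
    d_out = np.linalg.norm(X_out[:, None, :] - X_out[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        # k nearest neighbors in input space, excluding the sample itself.
        nbrs = np.argsort(d_in[i])[1:k + 1]
        # Mean distortion ratio over the neighborhood (eps avoids 0/0).
        scores[i] = np.mean(d_out[i, nbrs] / (d_in[i, nbrs] + 1e-12))
    return scores

# Toy example: sample 0's output embedding is displaced far from where its
# input neighborhood would predict, so it should rank as most susceptible.
rng = np.random.default_rng(0)
X_in = rng.normal(size=(6, 4))
X_out = X_in.copy()
X_out[0] += 10.0  # large output shift for sample 0 only
scores = dmd_scores(X_in, X_out)
print(int(np.argmax(scores)))  # → 0
```

Ranking samples by such a local distortion ratio only requires nearest-neighbor distances rather than sample-specific adversarial search, which is consistent with the near-linear complexity the abstract claims.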

Country of Origin
🇺🇸 United States

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)