Same Question, Different Words: A Latent Adversarial Framework for Prompt Robustness
By: Tingchen Fu, Fazl Barez
Potential Business Impact:
Makes AI understand questions asked in different ways.
Insensitivity to semantics-preserving variations of prompts (paraphrases) is crucial for reliable behavior and real-world deployment of large language models. However, language models exhibit significant performance degradation when faced with semantically equivalent but differently phrased prompts, and existing solutions either depend on trial-and-error prompt engineering or require computationally expensive inference-time algorithms. In this study, building on the key insight that worst-case prompts exhibit a drift in embedding space, we present Latent Adversarial Paraphrasing (LAP), a dual-loop adversarial framework: the inner loop trains a learnable perturbation to serve as a "latent continuous paraphrase" while preserving semantics through Lagrangian regularization, and the outer loop optimizes the language model parameters on these perturbations. Extensive experiments across multiple LLM architectures on the RobustAlpaca benchmark demonstrate the effectiveness of LAP, with a 0.5%-4% absolute improvement in worst-case win rate over vanilla supervised fine-tuning.
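The abstract describes LAP only at a high level, so the following is a minimal PyTorch-style sketch of the dual-loop idea under stated assumptions, not the authors' implementation: the inner loop runs gradient ascent on a perturbation of the prompt embeddings, with a Lagrangian penalty that limits drift from the original embeddings (the semantics-preservation constraint), and the outer loop fine-tunes the model on the perturbed input. All names and hyperparameters (inner_steps, inner_lr, lambda_sem, the sign-based update) are illustrative assumptions.

```python
# Minimal sketch of a dual-loop latent-adversarial training step,
# assuming a Hugging Face-style causal LM. Illustrative only.
import torch

def lap_training_step(model, input_ids, labels, outer_optimizer,
                      inner_steps=3, inner_lr=1e-2, lambda_sem=1.0):
    """One LAP-style step: the inner loop searches for a worst-case
    'latent paraphrase' (an embedding perturbation), the outer loop
    fine-tunes the model on that perturbed input."""
    embeds = model.get_input_embeddings()(input_ids).detach()
    delta = torch.zeros_like(embeds, requires_grad=True)

    # Inner loop: ascend the task loss while a Lagrangian penalty keeps
    # the perturbed embeddings close to the originals.
    for _ in range(inner_steps):
        out = model(inputs_embeds=embeds + delta, labels=labels)
        drift = (delta ** 2).mean()            # proxy for semantic drift
        inner_obj = out.loss - lambda_sem * drift
        grad, = torch.autograd.grad(inner_obj, delta)
        # PGD-style sign update (an assumption, not specified in the abstract).
        delta = (delta + inner_lr * grad.sign()).detach().requires_grad_(True)

    # Outer loop: standard fine-tuning on the adversarially perturbed input.
    outer_optimizer.zero_grad()
    loss = model(inputs_embeds=embeds + delta.detach(), labels=labels).loss
    loss.backward()
    outer_optimizer.step()
    return loss.item()
```

In this reading, the perturbation plays the role of a continuous paraphrase: it changes the surface form of the prompt in embedding space while the drift penalty keeps its meaning anchored, so the outer update trains the model to answer consistently across such variations.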
Similar Papers
LatentPrompt: Optimizing Prompts in Latent Space
Computation and Language
Makes AI understand jobs better, automatically.
Say It Another Way: Auditing LLMs with a User-Grounded Automated Paraphrasing Framework
Computation and Language
Tests AI better by changing questions naturally.
Anti-adversarial Learning: Desensitizing Prompts for Large Language Models
Computation and Language
Keeps your private words secret from AI.