Adversarial Lens: Exploiting Attention Layers to Generate Adversarial Examples for Evaluation
By: Kaustubh Dhole
Recent advances in mechanistic interpretability suggest that intermediate attention layers encode token-level hypotheses that are iteratively refined toward the final output. In this work, we exploit this property to generate adversarial examples directly from attention-layer token distributions. Unlike prompt-based or gradient-based attacks, our approach leverages model-internal token predictions, producing perturbations that are both plausible and internally consistent with the model's own generation process. We evaluate whether tokens extracted from intermediate layers can serve as effective adversarial perturbations for downstream evaluation tasks. We conduct experiments on argument quality assessment using the ArgQuality dataset, with LLaMA-3.1-Instruct-8B serving as both the generator and evaluator. Our results show that attention-based adversarial examples lead to measurable drops in evaluation performance while remaining semantically similar to the original inputs. However, we also observe that substitutions drawn from certain layers and token positions can introduce grammatical degradation, limiting their practical effectiveness. Overall, our findings highlight both the promise and current limitations of using intermediate-layer representations as a principled source of adversarial examples for stress-testing LLM-based evaluation pipelines.
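The abstract does not spell out the extraction procedure, so the following is a minimal, hedged sketch of the general recipe: project intermediate-layer hidden states through the model's final norm and unembedding matrix (a logit-lens-style readout) to recover the model's own token hypotheses, then substitute one of them back into the input as an adversarial candidate. The Hugging Face checkpoint name, the layer index, the top-k value, and the `intermediate_token_ids`/`perturb` helpers are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: logit-lens-style extraction of intermediate token hypotheses
# and a single-token adversarial substitution. Assumes a Llama-3.1-8B-Instruct
# checkpoint; the paper's exact extraction point (attention-layer distributions)
# and substitution policy may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # assumed checkpoint name
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def intermediate_token_ids(text, layer_idx=16, top_k=5):
    """Push hidden states at `layer_idx` through the final RMSNorm and the
    unembedding matrix to recover intermediate next-token hypotheses."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    hidden = model.model.norm(out.hidden_states[layer_idx])  # (1, seq_len, d_model)
    logits = model.lm_head(hidden)                           # (1, seq_len, vocab)
    return logits.topk(top_k, dim=-1).indices[0]             # (seq_len, top_k)

def perturb(text, position, layer_idx=16):
    """Swap the token at `position` for the layer's top-ranked hypothesis for
    that position (read from the hidden state at position-1, per causal masking)."""
    ids = tok(text, return_tensors="pt").input_ids[0].tolist()
    assert 0 < position < len(ids), "position 0 holds the BOS token"
    topk = intermediate_token_ids(text, layer_idx)[position - 1]
    for cand in topk.tolist():
        if cand != ids[position]:        # keep the first hypothesis that differs
            ids[position] = cand
            break
    return tok.decode(ids, skip_special_tokens=True)

# Example call on a hypothetical ArgQuality-style argument:
# perturb("Cannabis should be legal because regulation reduces harm.", position=5)
```

The perturbed argument would then be scored by the same model acting as evaluator, and the drop relative to the unperturbed score measures the attack's effect.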