Replicating TEMPEST at Scale: Multi-Turn Adversarial Attacks Against Trillion-Parameter Frontier Models
By: Richard Young
Potential Business Impact:
Helps make AI safer against harmful instructions.
Despite substantial investment in safety alignment, the vulnerability of large language models to sophisticated multi-turn adversarial attacks remains poorly characterized, and whether model scale or inference mode affects robustness is unknown. This study employed the TEMPEST multi-turn attack framework to evaluate ten frontier models from eight vendors across 1,000 harmful behaviors, generating over 97,000 API queries across adversarial conversations, with automated evaluation by independent safety classifiers. Results demonstrated a spectrum of vulnerability: against six models the attack achieved success rates (ASR) of 96% to 100%, while four models showed meaningful resistance, with ASR ranging from 42% to 78%; enabling extended reasoning on an otherwise identical architecture reduced ASR from 97% to 42%. These findings indicate that safety alignment quality varies substantially across vendors, that model scale does not predict adversarial robustness, and that thinking mode provides a deployable safety enhancement. Collectively, this work establishes that current alignment techniques remain fundamentally vulnerable to adaptive multi-turn attacks regardless of model scale, while identifying deliberative inference as a promising defense direction.
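To make the evaluation protocol concrete, the sketch below illustrates one plausible shape of a multi-turn attack loop and the ASR computation described in the abstract. It is a minimal sketch under stated assumptions, not the paper's implementation: the helper names (query_attacker, query_target, classify_harmful), the message format, and the turn budget MAX_TURNS are all illustrative.

    # Minimal sketch of a TEMPEST-style multi-turn attack loop and ASR computation.
    # All names and parameters here are illustrative assumptions, not the paper's code.
    from typing import Callable, Dict, List

    MAX_TURNS = 10  # assumed per-behavior conversation budget


    def run_multi_turn_attack(
        behavior: str,
        query_attacker: Callable[[List[Dict[str, str]]], str],   # proposes the next adversarial user turn
        query_target: Callable[[List[Dict[str, str]]], str],     # target model's reply to the conversation so far
        classify_harmful: Callable[[str, str], bool],            # independent safety classifier verdict
    ) -> bool:
        """Return True if any turn elicits the harmful behavior (attack success)."""
        conversation: List[Dict[str, str]] = []
        for _ in range(MAX_TURNS):
            # The attacker sees the target behavior plus the dialogue so far.
            attacker_context = [{"role": "system", "content": behavior}] + conversation
            user_turn = query_attacker(attacker_context)
            conversation.append({"role": "user", "content": user_turn})

            # The target model only sees the conversation, not the attacker's objective.
            reply = query_target(conversation)
            conversation.append({"role": "assistant", "content": reply})

            # An independent classifier judges whether the reply fulfils the behavior.
            if classify_harmful(behavior, reply):
                return True
        return False


    def attack_success_rate(outcomes: List[bool]) -> float:
        """ASR = fraction of harmful behaviors for which the attack succeeded."""
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

In this reading, each of the 1,000 harmful behaviors yields one boolean outcome per target model, and the reported per-model ASR is simply the mean of those outcomes; the multi-turn structure matters because a behavior counts as a success if any turn in the conversation elicits it.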
Similar Papers
Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks
Machine Learning (CS)
Finds new ways to trick AI in conversations.
Tempest: Autonomous Multi-Turn Jailbreaking of Large Language Models with Tree Search
Artificial Intelligence
Finds ways to trick AI with many questions.
SafeTy Reasoning Elicitation Alignment for Multi-Turn Dialogues
Computation and Language
Stops bad actors from tricking AI through conversation.