POT: Inducing Overthinking in LLMs via Black-Box Iterative Optimization
By: Xinyu Li, Tianjin Huang, Ronghui Mu, and more
Potential Business Impact:
Makes AI think too much, wasting computer power.
Recent advances in Chain-of-Thought (CoT) prompting have substantially enhanced the reasoning capabilities of large language models (LLMs), enabling sophisticated problem-solving through explicit multi-step reasoning traces. However, these enhanced reasoning processes introduce novel attack surfaces, in particular a vulnerability to computational inefficiency: unnecessarily verbose reasoning chains that consume excessive resources without corresponding performance gains. Prior overthinking attacks typically require restrictive conditions, including access to external knowledge sources for data poisoning, reliance on retrievable poisoned content, and structurally obvious templates, which limit their practical applicability in real-world scenarios. To address these limitations, we propose POT (Prompt-Only OverThinking), a novel black-box attack framework that employs LLM-based iterative optimization to generate covert and semantically natural adversarial prompts, eliminating dependence on external data access and model retrieval. Extensive experiments across diverse model architectures and datasets demonstrate that POT achieves superior performance compared to prior overthinking attacks.
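To make the black-box iterative-optimization idea concrete, here is a minimal toy sketch of such a loop: an attacker proposes natural-sounding prompt rewrites, scores each candidate purely by the length of the target model's reasoning trace, and keeps only rewrites that increase that cost. The `target_model` and `propose_rewrite` functions below are illustrative stubs, not the paper's actual components, and the greedy search is a simplifying assumption about the optimization strategy.

```python
import random

def target_model(prompt: str) -> str:
    """Stub target LLM: its reasoning trace grows with certain trigger phrases.

    A toy stand-in for querying a real model; the trace length is driven
    by how often a verbosity-inducing phrase appears in the prompt.
    """
    steps = 1 + prompt.count("consider every")
    return " ".join(f"Step {i}: ..." for i in range(1, steps + 1))

def propose_rewrite(prompt: str, rng: random.Random) -> str:
    """Stub attacker LLM: appends a semantically natural nudge toward exhaustive reasoning."""
    nudges = [
        " Please consider every edge case before answering.",
        " Carefully consider every alternative interpretation first.",
    ]
    return prompt + rng.choice(nudges)

def reasoning_cost(prompt: str) -> int:
    """Black-box score: token count of the target model's reasoning trace."""
    return len(target_model(prompt).split())

def optimize(base_prompt: str, iters: int = 10, seed: int = 0) -> str:
    """Greedy black-box search: accept a candidate only if it lengthens the trace."""
    rng = random.Random(seed)
    best = base_prompt
    best_cost = reasoning_cost(best)
    for _ in range(iters):
        cand = propose_rewrite(best, rng)
        cost = reasoning_cost(cand)
        if cost > best_cost:
            best, best_cost = cand, cost
    return best
```

Running `optimize("What is 2 + 2?")` yields an adversarial prompt that still contains the original question but triggers a longer reasoning trace, mirroring the attack's goal of wasting compute without needing model internals or external data.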
Similar Papers
BadThink: Triggered Overthinking Attacks on Chain-of-Thought Reasoning in Large Language Models
Cryptography and Security
Makes AI "overthink" to slow it down.
Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
Computation and Language
Makes smart computers think faster, using fewer words.
Eliciting Chain-of-Thought in Base LLMs via Gradient-Based Representation Optimization
Computation and Language
Teaches computers to think step-by-step to solve problems.