Score: 1

COGNITION: From Evaluation to Defense against Multimodal LLM CAPTCHA Solvers

Published: December 2, 2025 | arXiv ID: 2512.02318v1

By: Junyu Wang, Changjia Zhu, Yuanbo Zhou, and more

BigTech Affiliations: Visa

Potential Business Impact:

AI can now solve many "prove you're human" tests.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper studies how multimodal large language models (MLLMs) undermine the security guarantees of visual CAPTCHAs. We identify the attack surface where an adversary can cheaply automate CAPTCHA solving using off-the-shelf models. We evaluate 7 leading commercial and open-source MLLMs across 18 real-world CAPTCHA task types, measuring single-shot accuracy, success under limited retries, end-to-end latency, and per-solve cost. We further analyze the impact of task-specific prompt engineering and few-shot demonstrations on solver effectiveness. We reveal that MLLMs can reliably solve recognition-oriented and low-interaction CAPTCHA tasks at human-like cost and latency, whereas tasks requiring fine-grained localization, multi-step spatial reasoning, or cross-frame consistency remain significantly harder for current models. By examining the reasoning traces of such MLLMs, we investigate the underlying mechanisms of why models succeed or fail on specific CAPTCHA puzzles and use these insights to derive defense-oriented guidelines for selecting and strengthening CAPTCHA tasks. We conclude by discussing implications for platform operators deploying CAPTCHA as part of their abuse-mitigation pipeline.

Code Availability: https://anonymous.4open.science/r/Captcha-465E/
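To make the evaluation protocol concrete, below is a minimal sketch of the kind of harness the abstract describes: scoring a solver on single-shot accuracy, success within a retry budget, mean latency, and mean per-solve cost. The `solve_captcha` interface, the `SolveResult` type, and the cost field are hypothetical placeholders standing in for an actual MLLM API client; this is not the authors' released code.

```python
# Sketch of an evaluation loop over CAPTCHA tasks for an MLLM-based solver.
# solve_captcha() is a hypothetical stand-in for a real vision-language API call.
import time
from dataclasses import dataclass


@dataclass
class SolveResult:
    answer: str
    cost_usd: float  # assumed per-call cost reported by the API client


def solve_captcha(image_path: str, prompt: str) -> SolveResult:
    """Placeholder for an MLLM call; replace with a real client."""
    raise NotImplementedError


def evaluate(tasks: list[tuple[str, str]], prompt: str, max_retries: int = 3) -> dict:
    """tasks: list of (image_path, ground_truth_answer) pairs."""
    single_shot_hits = retry_hits = 0
    total_latency = total_cost = 0.0
    for image_path, truth in tasks:
        solved = solved_first_try = False
        for attempt in range(max_retries):
            start = time.monotonic()
            result = solve_captcha(image_path, prompt)
            total_latency += time.monotonic() - start
            total_cost += result.cost_usd
            if result.answer.strip().lower() == truth.strip().lower():
                solved = True
                solved_first_try = (attempt == 0)
                break
        single_shot_hits += solved_first_try
        retry_hits += solved
    n = max(len(tasks), 1)
    return {
        "single_shot_accuracy": single_shot_hits / n,
        "success_with_retries": retry_hits / n,
        "mean_latency_s": total_latency / n,
        "mean_cost_usd": total_cost / n,
    }
```

The same loop can be rerun with different prompts or few-shot demonstrations to compare their effect on solver effectiveness, mirroring the prompt-engineering analysis described above.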

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science: Cryptography and Security