CAPTURE: A Benchmark and Evaluation for LVLMs in CAPTCHA Resolving
By: Jianyi Zhang, Ziyin Zhou, Xu Ji, and more
Benefiting from strong and efficient multi-modal alignment strategies, Large Visual Language Models (LVLMs) can simulate human visual and reasoning capabilities, including solving CAPTCHAs. However, existing benchmarks based on visual CAPTCHAs still face limitations. Previous studies designed their benchmarks and datasets around their own research objectives, so these benchmarks cannot comprehensively cover all CAPTCHA types. Notably, there is a dearth of benchmarks dedicated to LVLMs. To address this problem, we introduce the first CAPTCHA benchmark designed specifically for LVLMs, named CAPTURE (CAPTCHA for Testing Under Real-world Experiments). Our benchmark encompasses 4 main CAPTCHA types and 25 sub-types from 31 vendors. This diversity enables a multi-dimensional and thorough evaluation of LVLM performance. CAPTURE features extensive class variety, large-scale data, and unique LVLM-tailored labels, filling gaps in previous research in terms of data comprehensiveness and labeling pertinence. When evaluated on this benchmark, current LVLMs demonstrate poor performance in solving CAPTCHAs.
Similar Papers
MCA-Bench: A Multimodal Benchmark for Evaluating CAPTCHA Robustness Against VLM-based Attacks
CV and Pattern Recognition
Tests website security puzzles against smart computer attacks.
COGNITION: From Evaluation to Defense against Multimodal LLM CAPTCHA Solvers
Cryptography and Security
AI can now solve many online puzzles.