Score: 1

The Imitation Game: Using Large Language Models as Chatbots to Combat Chat-Based Cybercrimes

Published: December 24, 2025 | arXiv ID: 2512.21371v1

By: Yifan Yao, Baojuan Wang, Jinhao Duan, and more

Potential Business Impact:

Engages scammers by posing as a human victim, wasting their time and gathering intelligence on their operations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Chat-based cybercrime has emerged as a pervasive threat, with attackers leveraging real-time messaging platforms to conduct scams that rely on trust-building, deception, and psychological manipulation. Traditional defense mechanisms, which operate on static rules or shallow content filters, struggle to identify these conversational threats, especially when attackers use multimedia obfuscation and context-aware dialogue. In this work, we ask a provocative question inspired by the classic Imitation Game: Can machines convincingly pose as human victims to turn deception against cybercriminals? We present LURE (LLM-based User Response Engagement), the first system to deploy Large Language Models (LLMs) as active agents embedded within adversarial chat environments, rather than as passive classifiers. LURE combines automated discovery, adversarial interaction, and OCR-based analysis of image-embedded payment data. Applied to the setting of illicit video chat scams on Telegram, our system engaged 53 actors across 98 groups. In over 56 percent of interactions, the LLM maintained multi-round conversations without being detected as a bot, effectively "winning" the imitation game. Our findings reveal key behavioral patterns in scam operations, such as payment flows, upselling strategies, and platform migration tactics.
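The abstract mentions OCR-based analysis of image-embedded payment data. As a minimal sketch of the post-OCR step only: once text has been recovered from a scammer's screenshot (for example, by an OCR engine), simple pattern matching can pull out candidate payment identifiers. The patterns, function names, and sample text below are illustrative assumptions, not the paper's actual implementation.

```python
import re

# Illustrative patterns for payment identifiers that might appear in OCR'd
# screenshots. These regexes are assumptions for this sketch, not the
# paper's extraction rules.
PATTERNS = {
    "bitcoin": re.compile(r"\b(?:bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "telegram_handle": re.compile(r"@\w{5,32}"),
}

def extract_payment_data(ocr_text: str) -> dict:
    """Scan OCR output for candidate payment identifiers, grouped by type."""
    found = {}
    for label, pattern in PATTERNS.items():
        hits = pattern.findall(ocr_text)
        if hits:
            found[label] = sorted(set(hits))  # deduplicate repeated mentions
    return found

# Hypothetical OCR output from a scam screenshot:
sample = "Send 0.01 BTC to bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq or DM @paydesk_bot"
print(extract_payment_data(sample))
```

In a full pipeline, the OCR step itself would precede this (e.g., running an engine such as Tesseract on the image), and the extracted identifiers would feed the behavioral analysis of payment flows the abstract describes.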

Country of Origin
🇺🇸 🇭🇰 🇨🇳 United States, Hong Kong, China

Page Count
14 pages

Category
Computer Science:
Cryptography and Security