The Imitation Game: Using Large Language Models as Chatbots to Combat Chat-Based Cybercrimes
By: Yifan Yao, Baojuan Wang, Jinhao Duan, and others
Potential Business Impact:
Fools scammers by pretending to be a victim.
Chat-based cybercrime has emerged as a pervasive threat, with attackers leveraging real-time messaging platforms to conduct scams that rely on trust-building, deception, and psychological manipulation. Traditional defense mechanisms, which operate on static rules or shallow content filters, struggle to identify these conversational threats, especially when attackers use multimedia obfuscation and context-aware dialogue. In this work, we ask a provocative question inspired by the classic Imitation Game: Can machines convincingly pose as human victims to turn deception against cybercriminals? We present LURE (LLM-based User Response Engagement), the first system to deploy Large Language Models (LLMs) as active agents rather than passive classifiers, embedded within adversarial chat environments. LURE combines automated discovery, adversarial interaction, and OCR-based analysis of image-embedded payment data. Applied to the setting of illicit video chat scams on Telegram, our system engaged 53 scam actors across 98 groups. In over 56 percent of interactions, the LLM sustained multi-round conversations without being identified as a bot, effectively "winning" the imitation game. Our findings reveal key behavioral patterns in scam operations, such as payment flows, upselling strategies, and platform migration tactics.
Similar Papers
Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams
Cryptography and Security
Scammers use AI to trick people into losing money.
Send to which account? Evaluation of an LLM-based Scambaiting System
Cryptography and Security
Catches scammers by talking to them.