Promoting Online Safety by Simulating Unsafe Conversations with LLMs

Published: July 29, 2025 | arXiv ID: 2507.22267v1

By: Owen Hoffman, Kangze Peng, Zehua You, and others

Potential Business Impact:

Teaches people to spot fake online chats.

Business Areas:
Simulation Software

Generative AI, including large language models (LLMs), has the potential -- and is already being used -- to increase the speed, scale, and variety of unsafe conversations online. LLMs lower the barrier to entry for bad actors seeking to create unsafe conversations, in particular because of their ability to generate persuasive, human-like text. In our current work, we explore ways to promote online safety by teaching people about unsafe conversations that can occur online, with and without LLMs. We build on prior work showing that LLMs can successfully simulate scam conversations, and we draw on research in the learning sciences showing that feedback on one's hypothetical actions can promote learning. In particular, we focus on simulating scam conversations using LLMs. Our system incorporates two LLMs that converse with each other -- a scammer LLM and a target LLM -- to simulate realistic, unsafe conversations that people may encounter online, and users of our system are asked to provide feedback to the target LLM.
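The two-LLM loop described above can be sketched as follows. This is a minimal, hypothetical illustration of the architecture, not the authors' implementation: `scammer_llm` and `target_llm` are deterministic stubs standing in for real LLM calls, and the feedback mechanism (revising the target's draft reply based on user input) is an assumption about how such a system might be wired.

```python
# Hypothetical sketch of a two-LLM scam-conversation simulation with
# user feedback steering the target LLM. The two functions below are
# stubs; a real system would call an actual LLM with a persona prompt.

def scammer_llm(history):
    # Stub: a scammer-persona LLM generating a persuasive unsafe message.
    return "You've won a prize! Please share your bank details to claim it."

def target_llm(history, user_feedback=None):
    # Stub: a target-persona LLM. If the user supplied feedback on the
    # draft reply, the target revises its behavior accordingly.
    if user_feedback:
        return f"(revised per feedback: {user_feedback}) I won't share personal details."
    return "Sure, what details do you need?"

def simulate(turns=1, feedback_fn=None):
    """Alternate scammer/target turns; let a user critique each target draft."""
    history = []
    for _ in range(turns):
        scam_msg = scammer_llm(history)
        history.append(("scammer", scam_msg))
        draft = target_llm(history)                      # target's hypothetical reply
        feedback = feedback_fn(draft) if feedback_fn else None
        reply = target_llm(history, user_feedback=feedback)
        history.append(("target", reply))
    return history

# Usage: the user flags the unsafe draft, and the target's final reply changes.
log = simulate(turns=1, feedback_fn=lambda draft: "never share bank details")
```

The key design point mirrored from the paper is that the user does not chat with the scammer directly; they critique the target LLM's behavior, which is where the learning-by-feedback happens.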

Page Count
5 pages

Category
Computer Science:
Human-Computer Interaction