Exposing LLM User Privacy via Traffic Fingerprint Analysis: A Study of Privacy Risks in LLM Agent Interactions
By: Yixiang Zhang, Xinhao Deng, Zhongyi Gu, and more
Potential Business Impact:
Shows how an AI agent's actions can be inferred from its encrypted internet traffic.
Large Language Models (LLMs) are increasingly deployed as agents that orchestrate tasks and integrate external tools to execute complex workflows. We demonstrate that these interactive behaviors leave distinctive fingerprints in encrypted traffic exchanged between users and LLM agents. By analyzing traffic patterns associated with agent workflows and tool invocations, adversaries can infer agent activities, distinguish specific agents, and even profile sensitive user attributes. To highlight this risk, we develop AgentPrint, which achieves an F1-score of 0.866 in agent identification and attains 73.9% and 69.1% top-3 accuracy in user attribute inference for simulated- and real-user settings, respectively. These results uncover an overlooked risk: the very interactivity that empowers LLM agents also exposes user privacy, underscoring the urgent need for technical countermeasures alongside regulatory and policy safeguards.
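The abstract describes classifying encrypted traffic by the side-channel features it still exposes (packet sizes, directions, bursts). The sketch below is not the paper's AgentPrint system; it is a minimal, hypothetical illustration of the general idea, using synthetic traces and a standard classifier to show how agent identity could leak from such features.

```python
# Illustrative sketch only (NOT the AgentPrint pipeline from the paper):
# classify which hypothetical "agent" produced a traffic trace using coarse
# features that remain observable under encryption: packet sizes and directions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(agent_id, length=200):
    """Toy encrypted-traffic trace: signed packet sizes (+ upstream, - downstream).
    Each hypothetical agent is given a slightly different size/direction profile."""
    sizes = rng.normal(loc=600 + 150 * agent_id, scale=200, size=length)
    directions = rng.choice([1, -1], size=length,
                            p=[0.3 + 0.1 * agent_id, 0.7 - 0.1 * agent_id])
    return sizes * directions

def features(trace):
    """Simple statistical fingerprint of a trace."""
    up, down = trace[trace > 0], trace[trace < 0]
    return [
        len(up), len(down),
        up.mean() if len(up) else 0.0,
        -down.mean() if len(down) else 0.0,
        np.abs(trace).sum(),
        np.abs(trace).std(),
    ]

# Labeled dataset of traces from three hypothetical agents.
X, y = [], []
for agent_id in range(3):
    for _ in range(300):
        X.append(features(synthetic_trace(agent_id)))
        y.append(agent_id)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```

Even this crude feature set separates the synthetic agents well, which mirrors the paper's broader point: interactive agent workflows shape traffic in ways that survive encryption.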
Similar Papers
User Behavior Analysis in Privacy Protection with Large Language Models: A Study on Privacy Preferences with Limited Data
Cryptography and Security
Protects your online secrets with less data.
Redefining Website Fingerprinting Attacks With Multiagent LLMs
Cryptography and Security
Lets computers guess websites people visit.
A Survey on Privacy Risks and Protection in Large Language Models
Cryptography and Security
Keeps your secrets safe from smart computer programs.