Evaluating Apple Intelligence's Writing Tools for Privacy Against Large Language Model-Based Inference Attacks: Insights from Early Datasets
By: Mohd. Farhan Israk Soumik, Syed Mhamudul Hasan, Abdur R. Shahid
Potential Business Impact:
Makes writing tools hide your feelings from computers.
The misuse of Large Language Models (LLMs) to infer emotions from text for malicious purposes, known as emotion inference attacks, poses a significant threat to user privacy. In this paper, we investigate the potential of Apple Intelligence's writing tools, integrated across iPhone, iPad, and MacBook, to mitigate these risks through text modifications such as rewriting and tone adjustment. By developing novel early-stage datasets specifically for this purpose, we empirically assess how different text modifications influence LLM-based emotion detection. Our assessment suggests strong potential for Apple Intelligence's writing tools as privacy-preserving mechanisms. These findings lay the groundwork for future adaptive rewriting systems capable of dynamically neutralizing sensitive emotional content to enhance user privacy. To the best of our knowledge, this research provides the first empirical analysis of Apple Intelligence's text-modification tools in a privacy-preservation context, with the broader goal of developing on-device, user-centric mechanisms that protect deployed systems against advanced LLM-based inference attacks.
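The evaluation described in the abstract, measuring how much a rewrite reduces an attacker's ability to infer emotion, can be sketched as below. This is a hypothetical illustration, not the authors' code: the keyword lookup in `infer_emotion` is a toy stand-in for a real LLM-based attack, and the sample pairs are invented for demonstration.

```python
def infer_emotion(text: str) -> str:
    """Toy stand-in for an LLM-based emotion inference attack (hypothetical)."""
    lexicon = {
        "furious": "anger", "thrilled": "joy",
        "devastated": "sadness", "terrified": "fear",
    }
    for word, emotion in lexicon.items():
        if word in text.lower():
            return emotion
    return "neutral"

def detection_rate(pairs):
    """Fraction of samples whose ground-truth emotion is still detected."""
    hits = sum(1 for text, label in pairs if infer_emotion(text) == label)
    return hits / len(pairs)

# (original text, text after a writing-tool rewrite, ground-truth emotion)
samples = [
    ("I am furious about the delay.", "I am concerned about the delay.", "anger"),
    ("I'm thrilled with the results!", "The results are satisfactory.", "joy"),
]

before = detection_rate([(orig, label) for orig, _, label in samples])
after = detection_rate([(rewritten, label) for _, rewritten, label in samples])
print(before, after)  # a drop from before to after means the rewrite hid emotional cues
```

The privacy benefit is quantified as the gap between the two detection rates; a real study would replace the toy classifier with an actual LLM prompt and the sample list with the collected dataset.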
Similar Papers
Beyond PII: How Users Attempt to Estimate and Mitigate Implicit LLM Inference
Human-Computer Interaction
AI can guess your secrets from your words.
Voices of Freelance Professional Writers on AI: Limitations, Expectations, and Fears
Computation and Language
Helps writers use AI for stories in many languages.
SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation
Cryptography and Security
Keeps your private info safe from smart computer programs.