Personalizing Agent Privacy Decisions via Logical Entailment
By: James Flemings, Ren Yi, Octavian Suciu, and more
Potential Business Impact:
Keeps your personal data private from AI assistants.
Personal language model-based agents are becoming more widespread for completing tasks on behalf of users; however, this raises serious privacy questions about whether these models will appropriately disclose user data. While prior work has evaluated language models on data-sharing scenarios based on general privacy norms, we focus on personalizing language models' privacy decisions, grounding their judgments directly in prior user privacy decisions. Our findings suggest that general privacy norms are insufficient for effective personalization of privacy decisions. Furthermore, we find that eliciting privacy judgments from the model through In-context Learning (ICL) is unreliable due to misalignment with the user's prior privacy judgments and opaque reasoning traces, which make it difficult for the user to interpret the reasoning behind the model's decisions. To address these limitations, we propose ARIEL (Agentic Reasoning with Individualized Entailment Logic), a framework that jointly leverages a language model and rule-based logic for structured data-sharing reasoning. ARIEL formulates personalization of data sharing as an entailment problem: whether a prior user judgment on a data-sharing request implies the same judgment for an incoming request. Our experimental evaluations on advanced models and publicly available datasets demonstrate that ARIEL can reduce the F1 score error by 39.1% over language model-based reasoning (ICL), showing that ARIEL is effective at correctly judging requests where the user would approve data sharing. Overall, our findings suggest that combining LLMs with strict logical entailment is a highly effective strategy for enabling personalized privacy judgments for agents.
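To make the entailment framing concrete, here is a minimal sketch in Python of how a prior user judgment could be reused for an incoming request, with an LLM-based (ICL-style) judgment only as a fallback. The attribute names (data_type, recipient, purpose), the exact-match entailment check, and the decide/llm_fallback interface are illustrative assumptions, not the paper's actual ARIEL implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    """A data-sharing request, described by hypothetical attributes."""
    data_type: str   # e.g. "calendar", "location"
    recipient: str   # e.g. "coworker", "advertiser"
    purpose: str     # e.g. "scheduling", "marketing"

@dataclass(frozen=True)
class PriorJudgment:
    request: Request
    approved: bool   # the user's earlier decision on this request

def entails(prior: Request, incoming: Request) -> bool:
    """Toy entailment check: the prior request covers the incoming one only
    if every attribute matches; a real system would use a richer rule base
    (e.g. 'contacts' subsuming 'coworker')."""
    return (prior.data_type == incoming.data_type
            and prior.recipient == incoming.recipient
            and prior.purpose == incoming.purpose)

def decide(incoming: Request,
           history: list[PriorJudgment],
           llm_fallback) -> bool:
    """Apply prior judgments via strict entailment first (interpretable,
    rule-based path); defer to the language model only when no prior
    judgment entails the incoming request."""
    for prior in history:
        if entails(prior.request, incoming):
            return prior.approved
    return llm_fallback(incoming, history)
```

The key design point this sketch tries to convey is that the rule-based entailment path is deterministic and traceable back to a specific prior user decision, while the opaque model-based judgment is confined to requests the logic cannot resolve.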
Similar Papers
The Personalization Paradox: Semantic Loss vs. Reasoning Gains in Agentic AI Q&A
Information Retrieval
Makes AI tutors give better, personalized advice.
Can LLMs Make (Personalized) Access Control Decisions?
Cryptography and Security
AI helps apps decide who sees your data.
An LLM-enabled semantic-centric framework to consume privacy policies
Artificial Intelligence
Helps computers understand website privacy rules.