Towards Aligning Personalized Conversational Recommendation Agents with Users' Privacy Preferences
By: Shuning Zhang, Ying Ma, Jingruo Chen, and more
Potential Business Impact:
AI learns your privacy rules to protect you.
The proliferation of AI agents, with their complex and context-dependent actions, renders conventional privacy paradigms obsolete. This position paper argues that the current model of privacy management, rooted in a user's unilateral control over a passive tool, is fundamentally mismatched with the dynamic and interactive nature of AI agents. We contend that effective privacy protection requires agents to proactively align with users' privacy preferences rather than passively wait for users to exercise control. To ground this shift, using personalized conversational recommendation agents as a case study, we propose a conceptual framework built on Contextual Integrity (CI) theory and Privacy Calculus theory. This synthesis reframes automatic privacy control as an alignment problem: an agent initially does not know a user's preferences and learns them through implicit or explicit feedback. Upon receiving preference feedback, the agent applies alignment and Pareto optimization to conform to those preferences while balancing privacy against utility. We introduce formulations and instantiations, potential applications, and five open challenges.
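To make the abstract's framing concrete, here is a minimal sketch, not the paper's implementation, of the loop it describes: learning a user's contextual privacy preferences from feedback and then choosing a disclosure action on the privacy-utility Pareto frontier. The names (`CIContext`, `PreferenceModel`, `pareto_front`, `choose_action`), the Beta-Bernoulli preference estimate, and the numeric privacy/utility scores are all illustrative assumptions, loosely following CI's (information type, recipient, transmission principle) parameters.

```python
# Hypothetical sketch: preference learning + Pareto-optimal disclosure choice.
# Not the paper's method; names, features, and scores are assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class CIContext:
    """Contextual Integrity parameters of a candidate information flow."""
    info_type: str   # e.g. "health", "location"
    recipient: str   # e.g. "recommender", "third_party_ad"
    principle: str   # e.g. "personalization", "resale"


class PreferenceModel:
    """Per-context Beta-Bernoulli estimate of P(user approves disclosure)."""

    def __init__(self):
        self.counts = {}  # CIContext -> [approvals, rejections]

    def update(self, ctx: CIContext, approved: bool):
        # Uniform Beta(1, 1) prior; explicit or implicit feedback updates it.
        a, r = self.counts.get(ctx, [1, 1])
        self.counts[ctx] = [a + approved, r + (not approved)]

    def p_approve(self, ctx: CIContext) -> float:
        a, r = self.counts.get(ctx, [1, 1])
        return a / (a + r)


def pareto_front(actions):
    """Keep actions not dominated in (privacy, utility); both are maximized."""
    return [
        a for a in actions
        if not any(
            b["privacy"] >= a["privacy"] and b["utility"] >= a["utility"]
            and (b["privacy"], b["utility"]) != (a["privacy"], a["utility"])
            for b in actions
        )
    ]


def choose_action(model, ctx, actions):
    """Scalarize the Pareto front with a weight set by the learned preference:
    the less likely the user is to approve disclosure, the heavier privacy weighs."""
    w_privacy = 1.0 - model.p_approve(ctx)
    return max(
        pareto_front(actions),
        key=lambda a: w_privacy * a["privacy"] + (1 - w_privacy) * a["utility"],
    )


if __name__ == "__main__":
    model = PreferenceModel()
    ctx = CIContext("location", "recommender", "personalization")
    # Candidate disclosure levels with assumed (privacy, utility) scores.
    actions = [
        {"name": "share_exact",  "privacy": 0.1, "utility": 0.9},
        {"name": "share_coarse", "privacy": 0.6, "utility": 0.6},
        {"name": "withhold",     "privacy": 1.0, "utility": 0.2},
    ]
    # Simulated explicit feedback: the user rejects location sharing twice,
    # so p_approve drops to 0.25 and the privacy weight rises to 0.75.
    model.update(ctx, approved=False)
    model.update(ctx, approved=False)
    print(choose_action(model, ctx, actions)["name"])  # -> "withhold"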
Similar Papers
Position: Human-Robot Interaction in Embodied Intelligence Demands a Shift From Static Privacy Controls to Dynamic Learning
Human-Computer Interaction
Keeps your private information safe from smart robots.
Acceptability of AI Assistants for Privacy: Perceptions of Experts and Users on Personalized Privacy Assistants
Human-Computer Interaction
AI helps you manage privacy without thinking.
Rethinking User Empowerment in AI Recommender Systems: Designing through Transparency and Control
Human-Computer Interaction
Lets you control what online stuff you see.