Can LLMs Make (Personalized) Access Control Decisions?
By: Friederike Groschupp, Daniele Lain, Aritra Dhar, and more
Potential Business Impact:
AI helps apps decide who sees your data.
Precise access control decisions are crucial to the security of both traditional applications and emerging agent-based systems. Typically, these decisions are made by users during app installation or at runtime. Due to the increasing complexity and automation of systems, making these access control decisions can place a significant cognitive load on users, often overloading them and leading to suboptimal or even arbitrary access control decisions. To address this problem, we propose to leverage the processing and reasoning capabilities of large language models (LLMs) to make dynamic, context-aware decisions aligned with the user's security preferences. For this purpose, we conducted a user study, which resulted in a dataset of 307 natural-language privacy statements and 14,682 access control decisions made by users. We then compare these decisions against those made by two LLM variants: a general one and a personalized one, for which we also gathered user feedback on 1,446 of its decisions. Our results show that, in general, LLMs can reflect users' preferences well, achieving up to 86% accuracy when compared to the decision made by the majority of users. Our study also reveals a crucial trade-off in personalizing such a system: while providing user-specific privacy preferences to the LLM generally improves agreement with individual user decisions, adhering to those preferences can also violate some security best practices. Based on our findings, we discuss design and risk considerations for implementing a practical natural-language-based access control system that balances personalization, security, and utility.
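To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how a natural-language privacy preference could be turned into an LLM-mediated access control decision. The prompt template, the `query_llm` stub, and the deny-by-default parsing are all illustrative assumptions; a real system would replace the stub with an actual model call.

```python
# Hypothetical sketch of personalized, LLM-based access control.
# All function names and the prompt format are assumptions for illustration.

def build_prompt(privacy_statement: str, app: str, resource: str) -> str:
    """Combine the user's stated preference with the concrete request."""
    return (
        "You are an access control assistant.\n"
        f"User privacy preference: {privacy_statement}\n"
        f"Request: app '{app}' asks to access '{resource}'.\n"
        "Answer with exactly ALLOW or DENY."
    )

def query_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call, with a deterministic toy rule
    # so the sketch is runnable. Replace with an API call in practice.
    return "DENY" if "access 'location'" in prompt else "ALLOW"

def decide(privacy_statement: str, app: str, resource: str) -> bool:
    answer = query_llm(build_prompt(privacy_statement, app, resource))
    # Deny-by-default: anything other than an explicit ALLOW is refused.
    return answer.strip().upper() == "ALLOW"

# Example: a user who never shares location data with third-party apps
pref = "Never share my location with third-party apps."
print(decide(pref, "WeatherNow", "location"))  # False (denied)
print(decide(pref, "WeatherNow", "calendar"))  # True (allowed)
```

The deny-by-default parse reflects the paper's security concern: when the model's answer is ambiguous, failing closed is the safer choice, even if it occasionally contradicts the user's stated preference.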
Similar Papers
LLMs in Cybersecurity: Friend or Foe in the Human Decision Loop?
Cryptography and Security
AI helps some people make better choices.
Towards Harnessing the Power of LLMs for ABAC Policy Mining
Cryptography and Security
Computers learn to make access rules automatically.
Responsible LLM Deployment for High-Stake Decisions by Decentralized Technologies and Human-AI Interactions
Computers and Society
Makes AI safer for important money choices.