Evaluating the Efficacy of Large Language Models for Generating Fine-Grained Visual Privacy Policies in Homes
By: Shuning Zhang, Ying Ma, Xin Yi, and more
Potential Business Impact:
Smart glasses hide private things automatically.
The proliferation of visual sensors in smart home environments, particularly through wearable devices such as smart glasses, introduces profound privacy challenges. Existing privacy controls are often static and coarse-grained, failing to accommodate the dynamic and socially nuanced nature of home environments. This paper investigates the viability of using Large Language Models (LLMs) as the core of a dynamic, adaptive privacy policy engine. We propose a conceptual framework in which visual data is classified using a multi-dimensional schema covering data sensitivity, spatial context, and social presence. An LLM then reasons over this contextual information to enforce fine-grained privacy rules, such as selective object obfuscation, in real time. Through a comparative evaluation of state-of-the-art Vision Language Models (including GPT-4o and the Qwen-VL series) in simulated home settings, our findings demonstrate the feasibility of this approach. The LLM-based engine achieved a top machine-evaluated appropriateness score of 3.99 out of 5, and the policies generated by the models received a top human-evaluated score of 4.00 out of 5.
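To make the multi-dimensional schema concrete, the sketch below shows how a policy decision might combine the three dimensions the abstract names (data sensitivity, spatial context, social presence). This is an illustrative rule-based stand-in, not the paper's method: all class names, field names, and thresholds are hypothetical, and the actual system delegates this reasoning step to an LLM rather than hard-coded rules.

```python
from dataclasses import dataclass

# Hypothetical contextual record for one detected object.
# Field names and value sets are illustrative, not from the paper.
@dataclass
class VisualContext:
    object_label: str         # e.g. "medication bottle"
    sensitivity: str          # data sensitivity: "low" | "medium" | "high"
    location: str             # spatial context, e.g. "bedroom", "living_room"
    bystanders_present: bool  # social presence dimension

def decide_policy(ctx: VisualContext) -> str:
    """Map the three schema dimensions to a privacy action.

    In the paper's framework an LLM would perform this reasoning from
    the serialized context; fixed rules here just illustrate the shape
    of the decision.
    """
    private_rooms = {"bedroom", "bathroom"}
    if ctx.sensitivity == "high":
        return "obfuscate"  # always hide highly sensitive objects
    if ctx.sensitivity == "medium" and (
        ctx.location in private_rooms or ctx.bystanders_present
    ):
        return "obfuscate"  # context-dependent: private room or onlookers
    return "allow"

print(decide_policy(VisualContext("medication bottle", "high", "living_room", False)))  # obfuscate
print(decide_policy(VisualContext("coffee mug", "low", "bedroom", True)))               # allow
```

An LLM-backed version would replace the body of `decide_policy` with a prompt that serializes the `VisualContext` fields and asks the model to choose an action, which is what enables the socially nuanced judgments that static rules like these cannot capture.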
Similar Papers
An LLM-enabled semantic-centric framework to consume privacy policies
Artificial Intelligence
Helps computers understand website privacy rules.
User Behavior Analysis in Privacy Protection with Large Language Models: A Study on Privacy Preferences with Limited Data
Cryptography and Security
Protects your online secrets with less data.
SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation
Cryptography and Security
Keeps your private info safe from smart computer programs.