Score: 1

Benchmarking LLM Privacy Recognition for Social Robot Decision Making

Published: July 22, 2025 | arXiv ID: 2507.16124v2

By: Dakota Sullivan , Shirley Zhang , Jennica Li and more

Potential Business Impact:

Robots learn to protect your private home information.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

While robots have previously utilized rule-based systems or probabilistic models for user interaction, the rapid evolution of large language models (LLMs) presents new opportunities to develop LLM-powered robots for enhanced human-robot interaction (HRI). To fully realize these capabilities, however, robots need to collect data such as audio, fine-grained images, video, and locations. As a result, LLMs often process sensitive personal information, particularly within private environments, such as homes. Given the tension between utility and privacy risks, evaluating how current LLMs manage sensitive data is critical. Specifically, we aim to explore the extent to which out-of-the-box LLMs are privacy-aware in the context of household robots. In this work, we present a set of privacy-relevant scenarios developed using the Contextual Integrity (CI) framework. We first surveyed users' privacy preferences regarding in-home robot behaviors and then examined how their privacy orientations affected their choices of these behaviors (N = 450). We then provided the same set of scenarios and questions to state-of-the-art LLMs (N = 10) and found that the agreement between humans and LLMs was generally low. To further investigate the capabilities of LLMs as potential privacy controllers, we implemented four additional prompting strategies and compared their results. We discuss the performance of the evaluated models as well as the implications and potential of AI privacy awareness in human-robot interaction.

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
18 pages

Category
Computer Science:
Robotics