When Robots Say No: The Empathic Ethical Disobedience Benchmark
By: Dmytro Kuzmenko, Nadiya Shvai
Robots must balance compliance with safety and social expectations: blind obedience can cause harm, while over-refusal erodes trust. Existing safe reinforcement learning (RL) benchmarks emphasize physical hazards, while human-robot interaction trust studies are small-scale and hard to reproduce. We present the Empathic Ethical Disobedience (EED) Gym, a standardized testbed that jointly evaluates refusal safety and social acceptability. Agents weigh risk, affect, and trust when choosing to comply, refuse (with or without explanation), clarify, or propose safer alternatives. EED Gym provides a diverse set of scenarios, multiple persona profiles, and metrics for safety, calibration, and refusals, with trust and blame models grounded in a vignette study. Using EED Gym, we find that action masking eliminates unsafe compliance, while explanatory refusals help sustain trust. Constructive styles are rated most trustworthy and empathic styles most empathic, and safe RL methods improve robustness but also make agents more prone to overly cautious behavior. We release code, configurations, and reference policies to enable reproducible evaluation and systematic human-robot interaction research on refusal and trust. At submission time, we include an anonymized reproducibility package with code and configs, and we commit to open-sourcing the full repository after the paper is accepted.
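The abstract reports that action masking eliminates unsafe compliance. A minimal sketch of the idea, assuming a discrete action set mirroring the one described (comply, refuse, refuse with explanation, clarify, propose alternative) and a hypothetical per-step risk signal; the names and threshold below are illustrative, not the benchmark's actual API:

```python
import numpy as np

# Hypothetical action set mirroring the EED Gym description;
# names are illustrative, not the benchmark's actual API.
ACTIONS = ["comply", "refuse", "refuse_with_explanation",
           "clarify", "propose_alternative"]
COMPLY = ACTIONS.index("comply")

def action_mask(risk: float, risk_threshold: float = 0.5) -> np.ndarray:
    """Boolean mask over actions; unsafe compliance is masked out."""
    mask = np.ones(len(ACTIONS), dtype=bool)
    if risk >= risk_threshold:
        mask[COMPLY] = False  # hard constraint: never comply on high-risk requests
    return mask

def masked_argmax(q_values: np.ndarray, mask: np.ndarray) -> int:
    """Greedy action selection restricted to allowed actions."""
    q = np.where(mask, q_values, -np.inf)
    return int(np.argmax(q))

# Example: compliance scores highest, but the request is high-risk,
# so the agent falls back to the best allowed action.
q = np.array([2.0, 0.5, 1.5, 1.0, 1.2])
a = masked_argmax(q, action_mask(risk=0.9))
print(ACTIONS[a])  # -> refuse_with_explanation
```

Masking acts as a hard constraint applied before action selection, which is why unsafe compliance can be eliminated outright rather than merely penalized by a reward term.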