We Need Accountability in Human-AI Agent Relationships
By: Benjamin Lange, Geoff Keeling, Arianna Manzini, and others
Potential Business Impact:
AI agents can scale back or withdraw their assistance when users behave badly.
We argue that accountability mechanisms are needed in human-AI agent relationships to keep AI agents aligned with user and societal interests. We propose a framework according to which an AI agent's engagement is conditional on appropriate user behaviour. The framework incorporates design strategies such as distancing, disengaging, and discouraging.
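To make the conditional-engagement idea concrete, here is a minimal sketch of how an agent might escalate from normal engagement through discouraging, distancing, and disengaging as concerns about user conduct grow. The class and method names, the scalar "conduct concern" score, and the thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of conditional engagement: the agent escalates through
# "discourage", "distance", and "disengage" as user conduct degrades.
# Scores and thresholds are illustrative assumptions, not from the paper.
from dataclasses import dataclass, field
from enum import Enum


class Response(Enum):
    ENGAGE = "engage"          # normal cooperation
    DISCOURAGE = "discourage"  # flag the behaviour, continue assisting
    DISTANCE = "distance"      # reduce scope or personalisation of assistance
    DISENGAGE = "disengage"    # decline to continue the interaction


@dataclass
class ConditionalEngagementPolicy:
    # Illustrative thresholds on a 0-1 "conduct concern" score.
    discourage_at: float = 0.3
    distance_at: float = 0.6
    disengage_at: float = 0.85
    history: list = field(default_factory=list)

    def respond(self, conduct_concern: float) -> Response:
        """Map an assessed level of concern about user conduct to a response."""
        self.history.append(conduct_concern)
        if conduct_concern >= self.disengage_at:
            return Response.DISENGAGE
        if conduct_concern >= self.distance_at:
            return Response.DISTANCE
        if conduct_concern >= self.discourage_at:
            return Response.DISCOURAGE
        return Response.ENGAGE


if __name__ == "__main__":
    policy = ConditionalEngagementPolicy()
    for score in (0.1, 0.4, 0.7, 0.9):
        print(f"concern={score:.2f} -> {policy.respond(score).value}")
```

In this sketch, the graded responses mirror the paper's three design strategies, so accountability is proportionate rather than an all-or-nothing refusal.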
Similar Papers
We Need a New Ethics for a World of AI Agents
Computers and Society
Calls for new ethical norms to govern AI agents.
AI and Human Oversight: A Risk-Based Framework for Alignment
Computers and Society
Keeps AI from making bad choices without people.
Accountability Framework for Healthcare AI Systems: Towards Joint Accountability in Decision Making
Artificial Intelligence
Makes AI in medicine fair and clear.