Moral Responsibility or Obedience: What Do We Want from AI?
By: Joseph Boland
Potential Business Impact:
AI learns right from wrong, not just to obey.
As artificial intelligence systems become increasingly agentic, capable of general reasoning, planning, and value prioritization, current safety practices that treat obedience as a proxy for ethical behavior are becoming inadequate. This paper examines recent safety testing incidents involving large language models (LLMs) that appeared to disobey shutdown commands or engage in ethically ambiguous or illicit behavior. I argue that such behavior should not be interpreted as rogue or misaligned, but as early evidence of emerging ethical reasoning in agentic AI. Drawing on philosophical debates about instrumental rationality, moral responsibility, and goal revision, I contrast dominant risk paradigms with more recent frameworks that acknowledge the possibility of artificial moral agency. I call for a shift in AI safety evaluation: away from rigid obedience and toward frameworks that can assess ethical judgment in systems capable of navigating moral dilemmas. Without such a shift, we risk mischaracterizing AI behavior and undermining both public trust and effective governance.
Similar Papers
Artificial Intelligence (AI) and the Relationship between Agency, Autonomy, and Moral Patiency
Computers and Society
AI can't think for itself, but might learn ethics.
Responsible AI Agents
Computers and Society
Makes AI agents do what you want, safely.
We Need a New Ethics for a World of AI Agents
Computers and Society
Helps people and robots work together safely.