Ethics Readiness of Artificial Intelligence: A Practical Evaluation Method
By: Laurynas Adomaitis, Vincent Israel-Jost, Alexei Grinbaum
We present Ethics Readiness Levels (ERLs), a four-level, iterative method to track how ethical reflection is implemented in the design of AI systems. ERLs bridge high-level ethical principles and everyday engineering by turning ethical values into concrete prompts, checks, and controls within real use cases. The evaluation is conducted using a dynamic, tree-like questionnaire built from context-specific indicators, ensuring relevance to the technology and application domain. Beyond being a managerial tool, ERLs facilitate a structured dialogue between ethics experts and technical teams, while our scoring system helps track progress over time. We demonstrate the methodology through two case studies: an AI facial sketch generator for law enforcement and a collaborative industrial robot. The ERL tool effectively catalyzes concrete design changes and promotes a shift from narrow technological solutionism to a more reflective, ethics-by-design mindset.
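To make the idea of a dynamic, tree-like questionnaire with a scoring system more concrete, here is a minimal Python sketch. The indicator questions, the aggregation rule, and the mapping onto four levels are illustrative assumptions for exposition only; they are not the authors' actual instrument or ERL scoring formula.

```python
# Hypothetical sketch: a tree of context-specific indicators whose answers
# are aggregated into one of four illustrative readiness levels.
# All questions, weights, and thresholds below are assumed, not from the paper.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Indicator:
    """One context-specific question; children are only reachable if answered yes."""
    question: str
    answer: Optional[bool] = None
    children: List["Indicator"] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of reachable indicators satisfied (0.0 to 1.0)."""
        if not self.answer:
            return 0.0
        if not self.children:
            return 1.0
        return sum(c.score() for c in self.children) / len(self.children)


def ethics_readiness_level(root: Indicator) -> int:
    """Map the aggregate score onto four illustrative levels (1..4)."""
    s = root.score()
    return 1 + min(3, int(s * 4))


# Toy example loosely inspired by the facial-sketch-generator case study.
root = Indicator("Has the team identified the ethical values at stake?", True, [
    Indicator("Is a bias check on the training data documented?", True),
    Indicator("Is there a human-in-the-loop control before operational use?", False),
])
print(ethics_readiness_level(root))  # -> 3 under these toy answers
```

In an actual evaluation the tree would be built from domain-specific indicators and revisited iteratively, so the level reflects progress of the design dialogue rather than a one-off audit.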