Echoes of AI Harms: A Human-LLM Synergistic Framework for Bias-Driven Harm Anticipation
By: Nicoleta Tantalaki, Sophia Vei, Athena Vakali
Potential Business Impact:
Enables organizations to identify bias-driven AI harms before systems are deployed, reducing downstream risk to users and stakeholders.
The growing influence of Artificial Intelligence (AI) systems on decision-making in critical domains has exposed their potential to cause significant harms, often rooted in biases embedded across the AI lifecycle. While existing frameworks and taxonomies document biases or harms in isolation, they rarely establish systematic links between specific bias types and the harms they cause, particularly within real-world sociotechnical contexts. Technical fixes proposed for AI biases are ill-equipped to address such harms, and because they are typically applied after a system has been developed or deployed, they offer limited preventive value. We propose ECHO, a novel framework for proactive AI harm anticipation through the systematic mapping of AI bias types to harm outcomes across diverse stakeholder and domain contexts. ECHO follows a modular workflow encompassing stakeholder identification, vignette-based presentation of biased AI systems, and dual (human-LLM) harm annotation, integrated within ethical matrices for structured interpretation. This human-centered approach enables early-stage detection of bias-to-harm pathways, guiding AI design and governance decisions from the outset. We validate ECHO in two high-stakes domains (disease diagnosis and hiring), revealing domain-specific bias-to-harm patterns and demonstrating ECHO's potential to support anticipatory governance of AI systems.
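To make the described workflow concrete, the sketch below models the pipeline from the abstract as a small data structure: vignettes describing biased AI systems, a dual human-LLM annotation step, and an ethical matrix that accumulates bias-to-harm mappings per stakeholder. All names here (Vignette, EthicalMatrix, dual_annotate) are hypothetical illustrations rather than an API from the paper, and the simple set union stands in for whatever annotation-reconciliation procedure ECHO actually specifies.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

# Hypothetical data model illustrating the ECHO workflow as summarized in
# the abstract; none of these names or structures come from the paper.

@dataclass(frozen=True)
class Vignette:
    """A short narrative presenting a biased AI system in context."""
    domain: str          # e.g. "hiring" or "disease diagnosis"
    bias_type: str       # e.g. "representation bias"
    description: str

@dataclass
class EthicalMatrix:
    """Rows are stakeholders, columns are bias types, cells collect harms."""
    cells: Dict[str, Dict[str, Set[str]]] = field(default_factory=dict)

    def record(self, stakeholder: str, bias_type: str, harms: Set[str]) -> None:
        self.cells.setdefault(stakeholder, {}).setdefault(bias_type, set()).update(harms)

def dual_annotate(vignette: Vignette, stakeholder: str,
                  human_harms: Set[str], llm_harms: Set[str]) -> Set[str]:
    """Merge human and LLM harm annotations for one vignette-stakeholder pair.
    A plain union is used here as a placeholder reconciliation rule."""
    return human_harms | llm_harms

# Minimal usage example for the hiring domain.
matrix = EthicalMatrix()
v = Vignette(domain="hiring",
             bias_type="representation bias",
             description="A resume screener trained mostly on male applicants.")
harms = dual_annotate(v, stakeholder="job applicants",
                      human_harms={"allocative harm: unfair rejection"},
                      llm_harms={"representational harm: stereotyping"})
matrix.record("job applicants", v.bias_type, harms)
print(matrix.cells)
```

Keying the matrix by stakeholder first mirrors the ethical-matrix convention of rows for stakeholders and columns for concerns, which is how the abstract frames structured interpretation of the annotations.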