Enabling Ethical AI: A case study in using Ontological Context for Justified Agentic AI Decisions
By: Liam McGee, James Harvey, Lucy Cull, and more
Potential Business Impact:
Makes AI decisions more trustworthy and easier to justify for everyone.
In this preprint, we present a collaborative human-AI approach to building an inspectable semantic layer for Agentic AI. AI agents first propose candidate knowledge structures from diverse data sources; domain experts then validate, correct, and extend these structures, and their feedback is used to improve subsequent models. We show how this process captures tacit institutional knowledge, improves response quality and efficiency, and mitigates institutional amnesia. We argue for a shift from post-hoc explanation to justifiable Agentic AI, where decisions are grounded in explicit, inspectable evidence and reasoning accessible to both experts and non-specialists.
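The abstract describes a propose-validate-feed-back workflow: agents suggest candidate knowledge structures, experts review them, and the recorded decisions become training signal and an inspectable audit trail. The sketch below is purely illustrative and not the authors' implementation; the class and function names (CandidateTriple, ExpertFeedback, review) and the triple-based representation are assumptions chosen to make the loop concrete.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

# Illustrative sketch of the human-in-the-loop validation cycle described
# in the abstract. All names here are hypothetical, not from the paper.

@dataclass
class CandidateTriple:
    """A candidate knowledge-structure element proposed by an AI agent."""
    subject: str
    predicate: str
    obj: str
    evidence: str             # pointer back to the source data it came from
    status: str = "proposed"  # proposed | accepted | corrected | rejected

@dataclass
class ExpertFeedback:
    """A domain expert's decision on one candidate, kept for later retraining."""
    triple: CandidateTriple
    decision: str                                   # "accept", "correct", or "reject"
    correction: Optional[CandidateTriple] = None    # expert-supplied replacement, if any
    rationale: str = ""                             # tacit knowledge captured as inspectable text

def review(candidates: List[CandidateTriple],
           expert_decide: Callable[[CandidateTriple],
                                   Tuple[str, Optional[CandidateTriple], str]]
           ) -> List[ExpertFeedback]:
    """Route AI-proposed candidates to an expert and record their feedback."""
    feedback = []
    for c in candidates:
        decision, correction, rationale = expert_decide(c)
        c.status = {"accept": "accepted",
                    "correct": "corrected",
                    "reject": "rejected"}[decision]
        feedback.append(ExpertFeedback(c, decision, correction, rationale))
    return feedback

if __name__ == "__main__":
    # Toy example: one proposed triple, accepted by a stubbed-out expert.
    proposed = [CandidateTriple("Policy-12", "supersedes", "Policy-07",
                                evidence="board-minutes-2021-03")]
    fb = review(proposed, lambda c: ("accept", None, "Matches the board minutes."))
    for f in fb:
        print(f.triple.status, "-", f.rationale)
```

In such a design, the stored ExpertFeedback records would serve two purposes at once: fine-tuning data for subsequent proposal models and a human-readable justification trail for each element of the semantic layer.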
Similar Papers
Ethical AI: Towards Defining a Collective Evaluation Framework
Artificial Intelligence
Makes AI fair and understandable for everyone.
Development of management systems using artificial intelligence systems and machine learning methods for boards of directors (preprint, unofficial translation)
Computers and Society
Makes AI leaders follow rules fairly and safely.
DAO-AI: Evaluating Collective Decision-Making through Agentic AI in Decentralized Governance
Artificial Intelligence
Explores AI agents voting on financial governance decisions the way people do.