A neuro-symbolic framework for accountability in public-sector AI
By: Allen Daniel Sunny
Automated eligibility systems increasingly determine access to essential public benefits, but the explanations they generate often fail to reflect the legal rules that authorize those decisions. This thesis develops a legally grounded explainability framework that links system-generated decision justifications to the statutory constraints of CalFresh, California's implementation of the federal Supplemental Nutrition Assistance Program (SNAP). The framework combines a structured ontology of eligibility requirements derived from the state's Manual of Policies and Procedures (MPP), a rule extraction pipeline that expresses statutory logic in a verifiable formal representation, and a solver-based reasoning layer that evaluates whether a given explanation aligns with governing law. Case evaluations demonstrate the framework's ability to detect legally inconsistent explanations, highlight violated eligibility rules, and support procedural accountability by making the basis of automated determinations traceable and contestable.
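To make the solver-based check concrete, the sketch below shows how one formalized eligibility rule and a system-generated explanation might be tested for consistency. It is a minimal sketch under stated assumptions: Z3 stands in for the thesis's unspecified solver, and the rule identifier and income thresholds are illustrative placeholders, not actual MPP provisions.

```python
# Minimal sketch of a solver-based consistency check between a decision
# explanation and a formalized eligibility rule. Assumptions (not from the
# thesis): Z3 as the solver; made-up thresholds stand in for MPP provisions.
# Requires: pip install z3-solver
from z3 import Solver, Int, Bool, Implies, sat

gross_income = Int("gross_monthly_income")   # case fact, in dollars
household_size = Int("household_size")       # case fact
eligible = Bool("eligible")                  # claimed decision outcome

# Hypothetical income limits by household size (illustrative values only).
INCOME_LIMIT = {1: 2430, 2: 3288, 3: 4144, 4: 5000}

def check_explanation(size, income, claimed_eligible):
    """Check a system explanation against the formalized rule.

    Returns (consistent, violated_rules). When the explanation contradicts
    the rule, the unsat core names the assertions involved, which is how
    violated eligibility rules can be surfaced to a reviewer.
    """
    s = Solver()
    s.set(unsat_core=True)
    s.add(household_size == size, gross_income == income)
    # Formalized statutory rule: over-limit income implies ineligibility.
    s.assert_and_track(
        Implies(gross_income > INCOME_LIMIT[size], eligible == False),
        "MPP-gross-income-limit",   # hypothetical rule identifier
    )
    # The outcome the system's explanation claims to justify.
    s.assert_and_track(eligible == claimed_eligible, "system-explanation")
    if s.check() == sat:
        return True, []
    return False, [str(c) for c in s.unsat_core()]

# An explanation claiming eligibility despite over-limit income is flagged:
ok, core = check_explanation(size=2, income=4000, claimed_eligible=True)
print(ok, core)  # e.g. False ['MPP-gross-income-limit', 'system-explanation']
```

Run over each generated explanation, a check of this shape yields a named list of contradicted rules rather than a bare pass/fail, which is what makes a determination traceable and contestable in the sense described above.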