Policy-Aware Generative AI for Safe, Auditable Data Access Governance

Published: October 27, 2025 | arXiv ID: 2510.23474v1

By: Shames Al Mandalawi, Muzakkiruddin Ahmed Mohammed, Hendrika Maclean, and others

Potential Business Impact:

Lets systems make safe, auditable data-access decisions directly from written policies.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Enterprises need access decisions that satisfy least privilege, comply with regulations, and remain auditable. We present a policy-aware controller that uses a large language model (LLM) to interpret natural language requests against written policies and metadata, not raw data. The system, implemented with Google Gemini 2.0 Flash, executes a six-stage reasoning framework (context interpretation, user validation, data classification, business purpose test, compliance mapping, and risk synthesis) with early hard policy gates and deny-by-default behavior. It returns APPROVE, DENY, or CONDITIONAL together with cited controls and a machine-readable rationale. We evaluate on fourteen canonical cases across seven scenario families using a privacy-preserving benchmark. Results show Exact Decision Match improving from 10/14 to 13/14 (92.9%) after applying policy gates, DENY recall rising to 1.00, False Approval Rate on must-deny families dropping to 0, and Functional Appropriateness and Compliance Adherence at 14/14. Expert ratings of rationale quality are high, and median latency is under one minute. These findings indicate that policy-constrained LLM reasoning, combined with explicit gates and audit trails, can translate human-readable policies into safe, compliant, and traceable machine decisions.
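The control flow the abstract describes, early hard gates that short-circuit to DENY, deny-by-default, and a decision returned with cited controls and a rationale, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the roles, control identifiers, and per-stage checks are hypothetical placeholders standing in for the LLM-driven stages.

```python
from dataclasses import dataclass, field

APPROVE, DENY, CONDITIONAL = "APPROVE", "DENY", "CONDITIONAL"

@dataclass
class Request:
    user_role: str
    purpose: str
    data_class: str          # e.g. "public", "internal", "restricted"
    regulated: bool = False  # e.g. subject to GDPR/HIPAA controls

@dataclass
class Decision:
    outcome: str
    cited_controls: list = field(default_factory=list)
    rationale: list = field(default_factory=list)

def decide(req: Request) -> Decision:
    # Deny-by-default: outcome stays DENY unless every gate passes.
    d = Decision(outcome=DENY)

    # Stages 1-2: context interpretation and user validation (hard gate).
    if req.user_role not in {"analyst", "engineer", "auditor"}:
        d.rationale.append("unvalidated role: hard gate -> DENY")
        d.cited_controls.append("AC-2 Account Management")  # illustrative
        return d

    # Stage 3: data classification (hard gate on restricted data).
    if req.data_class == "restricted" and req.user_role != "auditor":
        d.rationale.append("restricted data without auditor role -> DENY")
        d.cited_controls.append("AC-6 Least Privilege")  # illustrative
        return d

    # Stage 4: business purpose test.
    if not req.purpose:
        d.rationale.append("no stated business purpose -> DENY")
        return d

    # Stage 5: compliance mapping; regulated data is approved with conditions.
    if req.regulated:
        d.outcome = CONDITIONAL
        d.cited_controls.append("GDPR Art. 5 purpose limitation")
        d.rationale.append("regulated data: approve only with conditions")
        return d

    # Stage 6: risk synthesis; all gates passed.
    d.outcome = APPROVE
    d.rationale.append("all gates passed -> APPROVE")
    return d
```

In the paper the stages are carried out by LLM reasoning over written policies rather than hard-coded predicates; the sketch only shows why early gates plus deny-by-default make false approvals structurally impossible on must-deny paths.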

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Artificial Intelligence