Safe, Untrusted, "Proof-Carrying" AI Agents: toward the agentic lakehouse
By: Jacopo Tagliabue, Ciro Greco
Potential Business Impact:
Lets AI fix data without breaking it.
Data lakehouses run sensitive workloads, so AI-driven automation raises concerns about trust, correctness, and governance. We argue that API-first, programmable lakehouses provide the right abstractions for safe-by-design agentic workflows. Using Bauplan as a case study, we show how data branching and declarative environments extend naturally to agents, enabling reproducibility and observability while reducing the attack surface. We present a proof-of-concept in which agents repair data pipelines, gated by correctness checks inspired by proof-carrying code. Our prototype demonstrates that untrusted AI agents can operate safely on production data, and we outline a path toward a fully agentic lakehouse.
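To make the pattern concrete, here is a minimal sketch of the branch, repair, verify, merge loop the abstract describes. All names (`Lakehouse`, `Branch`, `agent_repair`, `no_nulls`) are illustrative assumptions, not Bauplan's real API: an untrusted agent edits only an isolated branch, and the merge succeeds only if every correctness check passes, in the spirit of proof-carrying code.

```python
# Hypothetical sketch of branch -> agent repair -> check -> merge.
# Class and function names are illustrative, NOT Bauplan's actual API.
import copy
from dataclasses import dataclass
from typing import Callable

@dataclass
class Branch:
    name: str
    tables: dict  # table name -> list of row dicts

class Lakehouse:
    def __init__(self, tables: dict):
        self.main = Branch("main", tables)

    def branch(self, name: str) -> Branch:
        # A real lakehouse would do a zero-copy metadata branch;
        # a deep copy stands in for that isolation here.
        return Branch(name, copy.deepcopy(self.main.tables))

    def merge(self, branch: Branch, checks: list[Callable[[dict], bool]]) -> bool:
        # Proof-carrying-style gate: the branch merges into main
        # only if every supplied check passes on its data.
        if all(check(branch.tables) for check in checks):
            self.main.tables = branch.tables
            return True
        return False

def agent_repair(branch: Branch) -> None:
    # Untrusted agent's proposed fix: drop rows with null amounts.
    branch.tables["orders"] = [
        row for row in branch.tables["orders"] if row["amount"] is not None
    ]

def no_nulls(tables: dict) -> bool:
    # Correctness check supplied by the platform, not the agent.
    return all(row["amount"] is not None for row in tables["orders"])

lh = Lakehouse({"orders": [{"amount": 10}, {"amount": None}]})
work = lh.branch("agent-fix-1")
agent_repair(work)
merged = lh.merge(work, checks=[no_nulls])
print(merged, len(lh.main.tables["orders"]))
```

The key design point is that the agent never touches `main` directly; production data changes only through the check-gated merge, so a faulty repair simply fails to merge.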
Similar Papers
Trustworthy AI in the Agentic Lakehouse: from Concurrency to Governance
Artificial Intelligence
Makes AI agents safe for important company data.
AGENTSAFE: A Unified Framework for Ethical Assurance and Governance in Agentic AI
Multiagent Systems
Makes AI agents safer and more trustworthy.
Formalizing the Safety, Security, and Functional Properties of Agentic AI Systems
Artificial Intelligence
Makes smart robots work together safely and reliably.