Please Don't Kill My Vibe: Empowering Agents with Data Flow Control
By: Charlie Summers, Haneen Mohammed, Eugene Wu
The promise of Large Language Model (LLM) agents is to perform complex, stateful tasks. This promise is undercut by significant risks (policy violations, process corruption, and security flaws) that stem from the lack of visibility into, and mechanisms to control, the undesirable data flows produced by agent actions. Today, agent workflows are responsible for enforcing these policies in ad hoc ways. Just as data validation and access controls shifted from the application to the DBMS, freeing application developers from these concerns, we argue that systems should support Data Flow Controls (DFCs) and enforce DFC policies natively. This paper describes early work developing a portable instance of DFC for DBMSes and outlines a broader research agenda toward DFC for agent ecosystems.
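To make the analogy to DBMS-enforced access controls concrete, the sketch below illustrates what a declaratively stated, system-enforced DFC policy might look like. This is a minimal illustration, not the paper's API: the policy structure, field names, and the enforcement hook are all assumptions introduced here for exposition.

```python
# Hypothetical sketch of native DFC enforcement (not the paper's implementation).
# A policy declares which data flows are forbidden; the system, not each agent
# workflow, rejects actions that would violate it.
from dataclasses import dataclass


@dataclass(frozen=True)
class DFCPolicy:
    name: str
    source: str          # data the policy protects, e.g. a table.column
    forbidden_sink: str   # class of destination the data must not flow to


# Illustrative policy: customer emails read by an agent must not flow into
# external tool calls (e.g. outbound HTTP requests issued by the agent).
POLICIES = [
    DFCPolicy(
        name="no_pii_exfiltration",
        source="customers.email",
        forbidden_sink="external_tool_call",
    ),
]


def check_flow(source: str, sink: str) -> None:
    """Enforcement hook a DFC-aware system might run before releasing data
    derived from `source` to an agent action of class `sink`."""
    for policy in POLICIES:
        if policy.source == source and policy.forbidden_sink == sink:
            raise PermissionError(
                f"DFC policy '{policy.name}' blocks flow {source} -> {sink}"
            )


# The violating flow is refused by the enforcement layer itself, rather than
# relying on ad hoc checks scattered across agent workflows.
try:
    check_flow("customers.email", "external_tool_call")
except PermissionError as err:
    print(err)
```

Under this framing, the agent workflow never has to re-implement the policy; it simply observes that a disallowed flow is rejected, much as an application today observes a DBMS access-control error.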