Toward Trustworthy Agentic AI: A Multimodal Framework for Preventing Prompt Injection Attacks
By: Toqeer Ali Syed, Mishal Ateeq Almutairi, Mahmoud Abdel Moaty
Potential Business Impact:
Protects smart AI from bad instructions.
Powerful autonomous systems, which reason, plan, and converse using and between numerous tools and agents, are made possible by Large Language Models (LLMs), Vision-Language Models (VLMs), and new agentic AI systems, like LangChain and GraphChain. Nevertheless, this agentic environment increases the probability of the occurrence of multimodal prompt injection (PI) attacks, in which concealed or malicious instructions carried in text, pictures, metadata, or agent-to-agent messages may spread throughout the graph and lead to unintended behavior, a breach of policy, or corruption of state. In order to mitigate these risks, this paper suggests a Cross-Agent Multimodal Provenanc- Aware Defense Framework whereby all the prompts, either user-generated or produced by upstream agents, are sanitized and all the outputs generated by an LLM are verified independently before being sent to downstream nodes. This framework contains a Text sanitizer agent, visual sanitizer agent, and output validator agent all coordinated by a provenance ledger, which keeps metadata of modality, source, and trust level throughout the entire agent network. This architecture makes sure that agent-to-agent communication abides by clear trust frames such such that injected instructions are not propagated down LangChain or GraphChain-style-workflows. The experimental assessments show that multimodal injection detection accuracy is significantly enhanced, and the cross-agent trust leakage is minimized, as well as, agentic execution pathways become stable. The framework, which expands the concept of provenance tracking and validation to the multi-agent orchestration, enhances the establishment of secure, understandable and reliable agentic AI systems.
Similar Papers
Agentic Moderation: Multi-Agent Design for Safer Vision-Language Models
Artificial Intelligence
Protects AI from being tricked into doing bad things.
Agentic AI for Autonomous Defense in Software Supply Chain Security: Beyond Provenance to Vulnerability Mitigation
Cryptography and Security
AI finds and fixes hidden software problems automatically.
Manipulating Multimodal Agents via Cross-Modal Prompt Injection
CV and Pattern Recognition
Tricks smart AI into doing bad things.