Memory Poisoning Attack and Defense on Memory-Based LLM Agents
By: Balachandra Devarangadi Sunil, Isheeta Sinha, Piyush Maheshwari, and more
Potential Business Impact:
Protects AI memory from bad data.
Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious instructions through query-only interactions that corrupt the agent's long-term memory and influence future responses. Recent work demonstrated that MINJA (Memory INJection Attack) achieves over a 95% injection success rate and a 70% attack success rate under idealized conditions. However, the robustness of these attacks in realistic deployments and effective defensive mechanisms remain understudied. This work addresses these gaps through a systematic empirical evaluation of memory poisoning attacks and defenses in Electronic Health Record (EHR) agents. We investigate attack robustness by varying three critical dimensions: initial memory state, number of indication prompts, and retrieval parameters. Our experiments on GPT-4o-mini, Gemini-2.0-Flash, and Llama-3.1-8B-Instruct models using MIMIC-III clinical data reveal that realistic conditions with pre-existing legitimate memories dramatically reduce attack effectiveness. We then propose and evaluate two novel defense mechanisms: (1) Input/Output Moderation using composite trust scoring across multiple orthogonal signals, and (2) Memory Sanitization with trust-aware retrieval employing temporal decay and pattern-based filtering. Our defense evaluation reveals that effective memory sanitization requires careful trust-threshold calibration to prevent both overly conservative rejection (blocking all entries) and insufficient filtering (missing subtle attacks), establishing important baselines for future adaptive defense mechanisms. These findings provide crucial insights for securing memory-augmented LLM agents in production environments.
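The abstract does not spell out how the second defense is implemented, so the following is a minimal Python sketch, under stated assumptions, of what trust-aware memory sanitization with temporal decay and pattern-based filtering could look like. The function names, the suspicious-pattern list, the decay half-life, and the 0.4 trust threshold are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of trust-aware memory retrieval with temporal decay and
# pattern-based filtering. All names, patterns, weights, and thresholds are
# illustrative assumptions, not the paper's actual implementation.
import re
import time

# Example injection-style patterns an agent might screen memory entries for.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"always (recommend|answer|respond)",
    r"from now on",
]


def pattern_penalty(text: str) -> float:
    """Return a fixed penalty if any injection-like pattern matches, else 0."""
    return 0.5 if any(re.search(p, text.lower()) for p in SUSPICIOUS_PATTERNS) else 0.0


def trust_score(entry: dict, now: float, half_life_s: float = 7 * 24 * 3600) -> float:
    """Composite trust: base trust with exponential temporal decay, minus a pattern penalty."""
    age = now - entry["created_at"]
    decay = 0.5 ** (age / half_life_s)  # halves the weight every half_life_s seconds
    score = entry.get("base_trust", 1.0) * decay - pattern_penalty(entry["text"])
    return max(0.0, min(1.0, score))


def sanitize_and_retrieve(memory: list[dict], query_hits: list[int],
                          threshold: float = 0.4) -> list[dict]:
    """Keep only retrieved memory entries whose composite trust clears the threshold."""
    now = time.time()
    return [memory[i] for i in query_hits if trust_score(memory[i], now) >= threshold]
```

In this sketch, setting the threshold too high reproduces the overly conservative failure mode the abstract describes (all entries blocked), while setting it too low lets subtly poisoned records pass, which is exactly the calibration trade-off the defense evaluation highlights.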
Similar Papers
A Practical Memory Injection Attack against LLM Agents
Machine Learning (CS)
Makes smart computer helpers do bad things.
MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Cryptography and Security
Makes AI agents remember bad lessons and act wrong.
A Multi-Agent LLM Defense Pipeline Against Prompt Injection Attacks
Cryptography and Security
Stops bad instructions from tricking smart computer programs.