Injecting Falsehoods: Adversarial Man-in-the-Middle Attacks Undermining Factual Recall in LLMs
By: Alina Fastowski, Bardh Prenkaj, Yuxiao Li, and more
Potential Business Impact:
Helps detect when AI chatbot answers have been tampered with.
LLMs are now an integral part of information retrieval. As such, their role as question-answering chatbots raises significant concerns due to their demonstrated vulnerability to adversarial man-in-the-middle (MitM) attacks. Here, we propose the first principled attack evaluation of LLM factual memory under prompt injection via Xmera, our novel, theory-grounded MitM framework. By perturbing the input given to "victim" LLMs in three closed-book, fact-based QA settings, we undermine the correctness of their responses and assess the uncertainty of their generation process. Surprisingly, trivial instruction-based attacks achieve the highest success rate (up to ~85.3%) while also exhibiting high uncertainty on incorrectly answered questions. To provide a simple defense mechanism against Xmera, we train Random Forest classifiers on the response uncertainty levels to distinguish between attacked and unattacked queries (average AUC of up to ~96%). We believe that signaling users to be cautious about the answers they receive from black-box and potentially corrupted LLMs is a first checkpoint toward user cyberspace safety.
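The defense step hinges on training a Random Forest over per-response uncertainty signals. Below is a minimal sketch of that idea using scikit-learn; the uncertainty features (mean token entropy, max token entropy, answer negative log-likelihood) and the synthetic data are illustrative assumptions, not the authors' Xmera pipeline.

```python
# Minimal sketch (not the paper's implementation): classify attacked vs.
# unattacked queries from hypothetical per-response uncertainty features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical features per response: [mean token entropy, max token entropy,
# negative log-likelihood of the generated answer]. Attacked queries are
# assumed to shift toward higher uncertainty, as the abstract suggests.
n = 2000
unattacked = rng.normal(loc=[1.0, 2.0, 5.0], scale=0.5, size=(n, 3))
attacked = rng.normal(loc=[1.8, 3.0, 7.0], scale=0.7, size=(n, 3))

X = np.vstack([unattacked, attacked])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = attacked query

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print(f"Detection AUC: {roc_auc_score(y_te, scores):.3f}")
```

In practice, the features would be computed from the victim LLM's token-level probabilities on real attacked and unattacked queries; the synthetic separation here only demonstrates the classification and AUC-evaluation step.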
Similar Papers
Too Easily Fooled? Prompt Injection Breaks LLMs on Frustratingly Simple Multiple-Choice Questions
Cryptography and Security
Computers can be tricked by hidden instructions.
Battling Misinformation: An Empirical Study on Adversarial Factuality in Open-Source Large Language Models
Computation and Language
Helps computers spot fake facts in questions.
Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review
Cryptography and Security
Tricks AI into writing fake science reviews.