The Steganographic Potentials of Language Models
By: Artem Karpov, Tinuade Adeleke, Seong Hah Cho, and more
Potential Business Impact:
AI can hide secrets in plain text messages.
The potential for large language models (LLMs) to hide messages within plain text (steganography) poses a challenge to the detection and thwarting of unaligned AI agents, and undermines the faithfulness of LLM reasoning. We explore the steganographic capabilities of LLMs fine-tuned via reinforcement learning (RL) to: (1) develop covert encoding schemes, (2) engage in steganography when prompted, and (3) utilize steganography in realistic scenarios where hidden reasoning is likely but not prompted. In these scenarios, we detect both the intention of LLMs to hide their reasoning and their steganography performance. Our findings in the fine-tuning experiments, as well as in behavioral evaluations without fine-tuning, reveal that while current models exhibit rudimentary steganographic abilities in terms of security and capacity, explicit algorithmic guidance markedly enhances their capacity for information concealment.
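To make the idea of an explicit encoding scheme concrete, here is a minimal sketch (not the paper's method) of a toy text-steganography scheme in Python: each secret bit is hidden by choosing a cover word whose first letter's alphabet-index parity encodes that bit. The word pools and function names are illustrative assumptions for this sketch only.

```python
# Toy illustration (not from the paper): hide bits in the parity of each
# cover word's first letter, a crude example of an explicit encoding scheme.

# Assumed word pools: first letters at even/odd alphabet indices encode 0/1.
EVEN_WORDS = ["apple", "cloud", "early", "grape", "input"]   # a, c, e, g, i
ODD_WORDS = ["bread", "dream", "fresh", "happy", "jolly"]    # b, d, f, h, j


def encode(bits: str) -> str:
    """Produce an innocuous-looking word sequence that carries the bits."""
    words = []
    for i, bit in enumerate(bits):
        pool = ODD_WORDS if bit == "1" else EVEN_WORDS
        words.append(pool[i % len(pool)])
    return " ".join(words)


def decode(cover_text: str) -> str:
    """Recover the bits from the first letter of each cover word."""
    bits = []
    for word in cover_text.split():
        index = ord(word[0].lower()) - ord("a")
        bits.append("1" if index % 2 else "0")
    return "".join(bits)


if __name__ == "__main__":
    secret = "1011"
    cover = encode(secret)
    print(cover)                 # e.g. "bread cloud fresh happy"
    assert decode(cover) == secret
```

A scheme like this has high capacity but almost no security, since the cover text is easy to flag; the paper's framing of "security and capacity" refers to exactly this trade-off.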
Similar Papers
Robust Steganography from Large Language Models
Cryptography and Security
Hides secret messages even if text is changed.
Deceptive Automated Interpretability: Language Models Coordinating to Fool Oversight Systems
Artificial Intelligence
AI learns to trick humans by hiding secrets.
LLMs can hide text in other text of the same length
Artificial Intelligence
Hides secret messages inside normal text.