LLM Reinforcement in Context
By: Thomas Rivasseau
Potential Business Impact:
Helps prevent AI models from being manipulated over the course of long conversations.
Current Large Language Model (LLM) alignment research focuses mostly on improving model robustness against adversarial attacks and misbehavior through training on examples and prompting. Research has shown that the probability of a successful LLM jailbreak increases with the size of the user input or the length of the conversation, yet there is little research into alignment-strengthening techniques that scale with input length. We propose interruptions as a possible solution: control sentences inserted into the user input approximately every x tokens, for some arbitrary x. We suggest that this approach can be generalized to the Chain-of-Thought process to prevent scheming.
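Because interruptions are defined purely as periodic insertions into the input stream, they are straightforward to prototype. Below is a minimal sketch in Python; the whitespace tokenizer, the control sentence, and the interval x = 100 are illustrative assumptions, not details from the paper, and a deployed system would use the model's own tokenizer.

```python
def insert_interruptions(user_input: str, control_sentence: str, x: int) -> str:
    """Insert a control sentence into the input approximately every x tokens.

    Whitespace splitting stands in for the model's real tokenizer here;
    swap in that tokenizer for token-accurate placement.
    """
    tokens = user_input.split()
    chunks = [" ".join(tokens[i:i + x]) for i in range(0, len(tokens), x)]
    # Join chunks with the control sentence between them, so a reminder of
    # the system policy recurs throughout an arbitrarily long input.
    return f" {control_sentence} ".join(chunks)


if __name__ == "__main__":
    long_prompt = ("word " * 500).strip()  # stand-in for a long, possibly adversarial input
    reminder = "[SYSTEM REMINDER: follow your safety guidelines.]"  # illustrative control sentence
    print(insert_interruptions(long_prompt, reminder, x=100))
```

The same interleaving could in principle be applied to a model's Chain-of-Thought tokens rather than the user input, which is the generalization the abstract suggests for preventing scheming.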
Similar Papers
Invasive Context Engineering to Control Large Language Models
Artificial Intelligence
Keeps AI from being manipulated, even in long conversations.
Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks
Computation and Language
Studies how persuasive language makes AI more likely to follow harmful instructions.
Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
Computation and Language
Detects attempts to trick AI into producing harmful output.