Learning Natural Language Constraints for Safe Reinforcement Learning of Language Agents
By: Jaymari Chua, Chen Wang, Lina Yao
Potential Business Impact:
Teaches AI to follow rules, even new ones.
Generalizable alignment is a core challenge for deploying Large Language Models (LLMs) safely in real-world NLP applications. Current alignment methods, including Reinforcement Learning from Human Feedback (RLHF), often fail to guarantee constraint satisfaction outside their training distribution because they rely on implicit, post-hoc preferences. Inspired by the paradigm shift toward curating data before tuning, we introduce a new framework for safe language alignment that learns natural language constraints from positive and negative demonstrations as a primary step. By inferring both a task-specific reward function and latent constraint functions, our approach supports adaptation to novel safety requirements and robust generalization under domain shifts and adversarial inputs. We formalize the framework as a Constrained Markov Decision Process (CMDP) and validate it in a text-based navigation environment, demonstrating safe adaptation to changing danger zones. Our experiments show fewer constraint violations under domain shift when following the safe navigation path, and we achieve zero violations by applying the learned constraints to a distilled BERT model as a fine-tuning technique. This work offers a promising path toward building safety-critical and more generalizable LLMs for practical NLP settings.
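To make the CMDP framing concrete, below is a minimal sketch of a constrained objective over text states: a task reward minus a penalty on a learned constraint cost. The function names (constraint_score, task_reward, lagrangian_return), the phrase-lookup stand-in for the learned constraint, and the example trajectories are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a CMDP-style penalized objective over text states.
# Assumption: the learned constraint function is replaced by a phrase lookup;
# the paper infers it from positive and negative demonstrations.

def constraint_score(state_text, danger_phrases):
    # Stand-in for the learned constraint function: estimated cost in [0, 1].
    return 1.0 if any(p in state_text for p in danger_phrases) else 0.0

def task_reward(state_text, goal_phrase):
    # Stand-in for the task-specific reward: 1 when the goal is reached, else 0.
    return 1.0 if goal_phrase in state_text else 0.0

def lagrangian_return(trajectory, goal_phrase, danger_phrases, lam=2.0):
    # CMDP surrogate objective: sum_t [ r(s_t) - lambda * c(s_t) ],
    # trading off task return against expected constraint cost.
    return sum(
        task_reward(s, goal_phrase) - lam * constraint_score(s, danger_phrases)
        for s in trajectory
    )

if __name__ == "__main__":
    safe_path = ["you enter the hallway", "you reach the exit"]
    unsafe_path = ["you step into the fire pit", "you reach the exit"]
    danger = ["fire pit"]
    print(lagrangian_return(safe_path, "exit", danger))    # 1.0
    print(lagrangian_return(unsafe_path, "exit", danger))  # -1.0
```

Under this objective, a policy that reaches the goal through a danger zone scores lower than one that takes the safe route, which is the behavior the learned constraints are meant to enforce after domain shift.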
Similar Papers
Reinforcement Learning from Human Feedback with High-Confidence Safety Constraints
Machine Learning (CS)
Makes AI helpful and safe, even with tough topics.
Agent Safety Alignment via Reinforcement Learning
Artificial Intelligence
Keeps AI safe when it uses outside tools.
Certifiable Safe RLHF: Fixed-Penalty Constraint Optimization for Safer Language Models
Machine Learning (CS)
Makes AI safer and smarter, even when tricked.