Score: 1

Reinforcement Learning from Human Feedback with High-Confidence Safety Constraints

Published: June 9, 2025 | arXiv ID: 2506.08266v1

By: Yaswanth Chittepu, Blossom Metevier, Will Schwarzer, and more

Potential Business Impact:

Keeps AI assistants helpful while providing high-confidence safety guarantees on sensitive topics.

Business Areas:
Human Computer Interaction Design, Science and Engineering

Existing approaches to language model alignment often treat safety as a tradeoff against helpfulness, which can lead to unacceptable responses in sensitive domains. To ensure reliable performance in such settings, we propose High-Confidence Safe Reinforcement Learning from Human Feedback (HC-RLHF), a method that provides high-confidence safety guarantees while maximizing helpfulness. Similar to previous methods, HC-RLHF explicitly decouples human preferences into helpfulness and harmlessness (safety), which are learned by training a reward model and a cost model, respectively. It then employs a two-step process to find safe solutions. In the first step, it optimizes the reward function under an intentionally pessimistic version of the cost constraint. In the second step, the trained model undergoes a safety test to verify whether its performance stays within an upper-confidence bound of the actual cost constraint. We provide a theoretical analysis of HC-RLHF, including proof that it will not return an unsafe solution with a probability greater than a user-specified threshold. For our empirical analysis, we apply HC-RLHF to align three different language models (Qwen2-1.5B, Qwen2.5-3B, and LLaMa3.2-3B) with human preferences. Our results demonstrate that HC-RLHF produces safe models with high probability and can improve harmlessness and helpfulness compared to previous methods.
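The two-step procedure in the abstract can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration of the second step (the safety test), assuming costs scored by a learned cost model on responses sampled from the candidate policy over held-out prompts, and assuming a one-sided Student-t upper confidence bound; the paper's actual bound, cost model, and threshold handling may differ.

```python
import numpy as np
from scipy import stats

def cost_upper_confidence_bound(costs, delta):
    """One-sided (1 - delta) Student-t upper confidence bound on the mean cost."""
    costs = np.asarray(costs, dtype=float)
    n = costs.size
    mean = costs.mean()
    std_err = costs.std(ddof=1) / np.sqrt(n)
    return mean + stats.t.ppf(1.0 - delta, df=n - 1) * std_err

def passes_safety_test(candidate_costs, cost_limit, delta):
    """Accept the candidate policy only if the upper-confidence bound on its
    expected cost stays within the constraint; otherwise abstain (return False)."""
    return cost_upper_confidence_bound(candidate_costs, delta) <= cost_limit

# Hypothetical usage: costs assigned by a trained cost model to responses
# generated by the candidate policy on a held-out prompt set.
rng = np.random.default_rng(0)
held_out_costs = rng.normal(loc=-0.2, scale=1.0, size=500)  # lower = more harmless
print(passes_safety_test(held_out_costs, cost_limit=0.0, delta=0.05))
```

In this reading, the first step optimizes the reward under an intentionally tightened (pessimistic) version of the same cost constraint so that candidates are likely to clear the test, and the user-specified delta controls the probability that an unsafe solution is ever returned.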

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
20 pages

Category
Computer Science: Machine Learning (CS)