Aligning Large Language Models via Fully Self-Synthetic Data
By: Shangjian Yin, Zhepei Wei, Xinyu Zhu, and more
Potential Business Impact:
Lets AI learn to be helpful by itself.
Traditional reinforcement learning from human feedback (RLHF) for large language models (LLMs) relies on expensive human-annotated datasets, and Reinforcement Learning from AI Feedback (RLAIF) also incurs significant costs: it requires collecting diverse prompts and corresponding responses, and often depends on external reward models or proprietary models such as GPT-4 to annotate preference pairs. In this work, we introduce Self-Alignment Optimization (SAO), a fully self-synthetic framework for LLM alignment in which all training data, including prompts (i.e., user queries), responses, and preferences, are generated by the model itself. Specifically, SAO first instructs the LLM to engage in persona role-play and generate diverse prompts and responses, which are then self-evaluated for preference optimization. Extensive experiments demonstrate that SAO effectively enhances the model's chat capabilities on standard benchmarks such as AlpacaEval 2.0 while maintaining strong performance on downstream objective tasks (e.g., question answering, math reasoning). Our work provides a practical solution for self-improvement in aligning LLMs, and the code for reproducing our results is available at: https://github.com/SJY8460/SAO.
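The abstract describes a three-step loop: the model role-plays personas to invent user prompts, samples candidate responses to those prompts, and self-judges the candidates to form preference pairs for optimization. The sketch below illustrates that loop under stated assumptions: the persona list, prompt templates, helper names (llm_generate, self_synthesize_preference_data), and the backbone checkpoint are placeholders for illustration, not the authors' released implementation (see the linked repository for that).

```python
# Minimal, illustrative sketch of a fully self-synthetic preference-data loop in the
# spirit of SAO. Names, prompts, and the model checkpoint below are assumptions for
# illustration only, not the authors' implementation.

import random
from transformers import pipeline

# Any small instruct-tuned model works for a toy run; this checkpoint is an assumption.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

PERSONAS = [
    "a curious high-school student",
    "a software engineer debugging production code",
    "a novelist looking for writing advice",
]

def llm_generate(prompt: str, temperature: float = 1.0) -> str:
    """Sample a completion from the policy model itself."""
    out = generator(prompt, max_new_tokens=256, do_sample=True, temperature=temperature)
    # The pipeline returns the prompt plus continuation; keep only the continuation.
    return out[0]["generated_text"][len(prompt):].strip()

def self_synthesize_preference_data(n_pairs: int) -> list[dict]:
    """Build (prompt, chosen, rejected) triples entirely from the model's own outputs."""
    data = []
    for _ in range(n_pairs):
        # 1) Persona role-play: the model invents a realistic user query.
        persona = random.choice(PERSONAS)
        user_prompt = llm_generate(
            f"You are {persona}. Write one realistic question you would ask an AI assistant."
        )

        # 2) Sample two candidate responses from the same model.
        resp_a = llm_generate(user_prompt)
        resp_b = llm_generate(user_prompt)

        # 3) Self-evaluation: the model judges which of its own responses it prefers.
        verdict = llm_generate(
            f"Question: {user_prompt}\n\nResponse A: {resp_a}\n\nResponse B: {resp_b}\n\n"
            "Which response is more helpful? Answer with exactly 'A' or 'B'.",
            temperature=0.1,
        )
        chosen, rejected = (resp_a, resp_b) if verdict.startswith("A") else (resp_b, resp_a)
        data.append({"prompt": user_prompt, "chosen": chosen, "rejected": rejected})
    return data

# The resulting triples can then be handed to a standard preference-optimization
# trainer (e.g., DPO) so the model is updated on its own synthetic feedback.
```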
Similar Papers
Clone-Robust AI Alignment
Machine Learning (CS)
Teaches AI to learn better from human choices.
Aligning Crowd-sourced Human Feedback for Reinforcement Learning on Code Generation by Large Language Models
Artificial Intelligence
Helps computers write code faster and better.
RLTHF: Targeted Human Feedback for LLM Alignment
Computation and Language
Teaches AI to be helpful with less human work.