An Uncertainty-Driven Adaptive Self-Alignment Framework for Large Language Models
By: Haoran Sun, Zekun Zhang, Shaoning Zeng
Potential Business Impact:
Teaches AI to be helpful and safe automatically.
Large Language Models (LLMs) have demonstrated remarkable progress in instruction following and general-purpose reasoning. However, achieving high-quality alignment with human intent and safety norms without human annotations remains a fundamental challenge. In this work, we propose an Uncertainty-Driven Adaptive Self-Alignment (UDASA) framework designed to improve LLM alignment in a fully automated manner. UDASA first generates multiple responses for each input and quantifies output uncertainty along three dimensions: semantics, factuality, and value alignment. Based on these uncertainty scores, the framework constructs preference pairs and categorizes training samples into three stages (conservative, moderate, and exploratory) according to the uncertainty difference between the paired responses. The model is then optimized progressively across these stages. In addition, we conduct a series of preliminary studies that validate the core design assumptions and provide strong empirical motivation for the proposed framework. Experimental results show that UDASA outperforms existing alignment methods across multiple tasks, including harmlessness, helpfulness, truthfulness, and controlled sentiment generation.
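To make the pipeline in the abstract concrete, below is a minimal Python sketch: sample several responses per prompt, score each along the three uncertainty dimensions, pair the least- and most-uncertain responses, and assign the pair to a training stage based on the uncertainty gap. The scorer implementations, the equal weighting, and the stage thresholds are all placeholder assumptions for illustration; the paper's actual estimators and settings are not specified here.

"""Minimal sketch of the UDASA pipeline described in the abstract.
All scorers, weights, and thresholds below are illustrative assumptions."""
import random
from dataclasses import dataclass

random.seed(0)

# Placeholder per-dimension uncertainty scorers (assumed, not from the paper).
def semantic_uncertainty(response: str) -> float:
    return random.random()  # stand-in for, e.g., a semantic-consistency estimate

def factual_uncertainty(response: str) -> float:
    return random.random()  # stand-in for a factuality checker's score

def value_uncertainty(response: str) -> float:
    return random.random()  # stand-in for a safety/value-alignment score

def total_uncertainty(response: str) -> float:
    """Equal-weight average over the three dimensions (an assumption)."""
    return (semantic_uncertainty(response)
            + factual_uncertainty(response)
            + value_uncertainty(response)) / 3.0

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # lower-uncertainty response
    rejected: str  # higher-uncertainty response
    gap: float     # uncertainty difference between rejected and chosen
    stage: str     # "conservative" | "moderate" | "exploratory"

def build_pair(prompt: str, responses: list[str],
               t_conservative: float = 0.5,
               t_moderate: float = 0.2) -> PreferencePair:
    """Rank sampled responses by uncertainty and stage the resulting pair.

    A large gap plausibly signals a reliable preference (conservative stage,
    trained first); a small gap marks a noisier, exploratory sample trained
    later. The thresholds here are hypothetical.
    """
    scored = sorted((total_uncertainty(r), r) for r in responses)
    (u_lo, chosen), (u_hi, rejected) = scored[0], scored[-1]
    gap = u_hi - u_lo
    if gap >= t_conservative:
        stage = "conservative"
    elif gap >= t_moderate:
        stage = "moderate"
    else:
        stage = "exploratory"
    return PreferencePair(prompt, chosen, rejected, gap, stage)

# Usage: sample k responses per prompt, build pairs, then run preference
# optimization stage by stage: conservative -> moderate -> exploratory.
if __name__ == "__main__":
    prompt = "How do I dispose of old batteries safely?"
    candidates = [f"response variant {i}" for i in range(4)]
    pair = build_pair(prompt, candidates)
    print(pair.stage, round(pair.gap, 3))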
Similar Papers
UDA: Unsupervised Debiasing Alignment for Pair-wise LLM-as-a-Judge
Artificial Intelligence
Makes AI judge other AI more fairly.
SDA: Steering-Driven Distribution Alignment for Open LLMs without Fine-Tuning
Computation and Language
Makes AI understand what you want better.
DSAS: A Universal Plug-and-Play Framework for Attention Optimization in Multi-Document Question Answering
Computation and Language
Helps computers understand long texts better.