Dynamic Alignment for Collective Agency: Toward a Scalable Self-Improving Framework for Open-Ended LLM Alignment
By: Panatchakorn Anantaprayoon, Nataliia Babina, Jad Tarifi, and more
Potential Business Impact:
AI learns to improve itself with new goals.
Large Language Models (LLMs) are typically aligned with human values using preference data or predefined principles such as helpfulness, honesty, and harmlessness. However, as AI systems progress toward Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), such value systems may become insufficient. In addition, human-feedback-based alignment remains resource-intensive and difficult to scale. While AI-feedback-based self-improving alignment methods have been explored as a scalable alternative, they have largely remained constrained to conventional alignment values. In this work, we explore both a more holistic alignment objective and a scalable, self-improving alignment approach. Aiming to transcend conventional alignment norms, we introduce Collective Agency (CA), a unified and open-ended alignment value that encourages integrated agentic capabilities. We also propose Dynamic Alignment, an alignment framework that enables an LLM to iteratively align itself. Dynamic Alignment comprises two key components: (1) automated training-dataset generation with LLMs, and (2) a self-rewarding mechanism, in which the policy model evaluates its own output candidates and assigns rewards for GRPO-based learning. Experimental results demonstrate that our approach successfully aligns the model to CA while preserving general NLP capabilities.
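The self-rewarding step described above can be sketched in miniature. In the paper the policy is an LLM that generates candidates and then judges them against the Collective Agency value; the sketch below replaces both with placeholder functions (`generate_candidates`, `self_reward` are illustrative names, not the authors' code) and shows only the GRPO-style group-relative advantage computation, which standardizes each candidate's reward within its group instead of using a learned critic.

```python
import random
import statistics

def generate_candidates(prompt, n=4):
    # Placeholder sampler: the real policy model would sample n responses.
    return [f"{prompt} :: candidate-{i}" for i in range(n)]

def self_reward(candidate):
    # Placeholder judge: the policy model would score its own output
    # against the Collective Agency rubric; here we use a random score.
    return random.random()

def grpo_advantages(rewards):
    # GRPO-style normalization: each candidate's advantage is its reward
    # standardized relative to the other candidates in the same group.
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in rewards]

prompt = "Describe an action that increases collective agency."
candidates = generate_candidates(prompt)
rewards = [self_reward(c) for c in candidates]
advantages = grpo_advantages(rewards)
# Candidates with positive advantage are reinforced; negative, discouraged.
```

Because the advantages are centered within each group, they sum to zero, so the update only shifts probability mass between a prompt's own candidates rather than relying on an absolute reward scale.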
Similar Papers
Learning Robust Social Strategies with Large Language Models
Machine Learning (CS)
Teaches AI to work together, not cheat.
Improving Model Alignment Through Collective Intelligence of Open-Source LLMs
Computation and Language
Makes AI smarter by using many AI helpers.
Multi-level Value Alignment in Agentic AI Systems: Survey and Perspectives
Artificial Intelligence
Makes AI agents follow human rules and values.