Research on Superalignment Should Advance Now with Parallel Optimization of Competence and Conformity
By: HyunJin Kim, Xiaoyuan Yi, Jing Yao, and more
Potential Business Impact:
Teaches super-smart AI to be good.
The recent leap in AI capabilities, driven by large generative models, has raised the possibility of achieving Artificial General Intelligence (AGI) and triggered discussion of Artificial Superintelligence (ASI), a system surpassing all humans across all domains. This gives rise to a critical research question: if we realize ASI, how do we align it with human values so that it benefits rather than harms human society? This is known as the Superalignment problem. Although many regard ASI as a purely hypothetical concept, this paper argues that superalignment is achievable and that research on it should advance immediately, through simultaneous and alternating optimization of task competence and value conformity. We posit that superalignment is not merely a safeguard for ASI but also necessary for its realization. To support this position, we first provide a formal definition of superalignment rooted in the gap between capability and capacity and elaborate on our argument. We then review existing paradigms, examine their interconnections and limitations, and outline a potential path to superalignment centered on two fundamental principles. We hope this work sheds light on a practical approach to developing value-aligned next-generation AI, increasing the benefits and reducing the potential harms for humanity.
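Since the abstract frames the paper's approach as alternating optimization of task competence and value conformity, a minimal toy sketch of that general idea may help. Everything below is an illustrative assumption, not the paper's formulation: a one-parameter regression task stands in for competence, and a penalty on out-of-range predictions stands in for conformity, with gradient steps alternating between the two objectives.

```python
import numpy as np

# Toy sketch: alternate gradient steps between a "competence" objective
# (fit the task) and a "conformity" objective (keep outputs in bounds).
# The data, losses, and model here are illustrative assumptions only.

rng = np.random.default_rng(0)

# Toy regression task: fit y = 2x (task competence), while a "value
# constraint" penalizes predictions outside [-1, 1] (value conformity).
X = rng.uniform(-1, 1, size=(64, 1))
y = 2.0 * X[:, 0]

w = np.zeros(1)  # single linear weight

def competence_grad(w):
    # Gradient of mean squared error on the task: L_task = mean((Xw - y)^2)
    residual = X @ w - y
    return 2.0 * X.T @ residual / len(y)

def conformity_grad(w):
    # Gradient of a hinge-style penalty on out-of-range predictions:
    # L_value = mean(max(|Xw| - 1, 0)^2)
    pred = X @ w
    excess = np.maximum(np.abs(pred) - 1.0, 0.0)
    return 2.0 * X.T @ (excess * np.sign(pred)) / len(y)

lr = 0.1
for step in range(200):
    # Alternate: even steps improve task competence,
    # odd steps improve value conformity.
    g = competence_grad(w) if step % 2 == 0 else conformity_grad(w)
    w -= lr * g

print(f"learned weight: {w[0]:.3f}")
```

Run as a script, this settles on a weight that trades off fitting the task against keeping predictions inside the allowed range; the alternating schedule is the simplest possible stand-in for the "simultaneous and alternating" optimization the abstract describes.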
Similar Papers
Super Co-alignment of Human and AI for Sustainable Symbiotic Society
Artificial Intelligence
Makes super-smart AI learn good values with us.
Neurodivergent Influenceability as a Contingent Solution to the AI Alignment Problem
Artificial Intelligence
Makes AI work with humans, not against them.
Aligning Artificial Superintelligence via a Multi-Box Protocol
Artificial Intelligence
Makes super-smart AI agree on truth, not lies.