Human-AI Interaction Alignment: Designing, Evaluating, and Evolving Value-Centered AI For Reciprocal Human-AI Futures
By: Hua Shen, Tiffany Knearem, Divy Thakkar, and more
Potential Business Impact:
Teaches AI and people to learn from each other.
The rapid integration of generative AI into everyday life underscores the need to move beyond unidirectional alignment models that only adapt AI to human values. This workshop focuses on bidirectional human-AI alignment: a dynamic, reciprocal process in which humans and AI co-adapt through interaction, evaluation, and value-centered design. Building on our past CHI 2025 BiAlign SIG and ICLR 2025 Workshop, it will bring together interdisciplinary researchers from HCI, AI, the social sciences, and other domains to advance value-centered AI and reciprocal human-AI collaboration. We focus on embedding human and societal values into alignment research, emphasizing not only steering AI toward human values but also enabling humans to critically engage with and evolve alongside AI systems. Through talks, interdisciplinary discussions, and collaborative activities, participants will explore methods for interactive alignment, frameworks for evaluating societal impact, and strategies for alignment in dynamic contexts. The workshop aims to bridge disciplinary gaps and establish a shared agenda for responsible, reciprocal human-AI futures.
Similar Papers
The Human-AI Handshake Framework: A Bidirectional Approach to Human-AI Collaboration
Human-Computer Interaction
AI learns with you, as a team.
Co-Alignment: Rethinking Alignment as Bidirectional Human-AI Cognitive Adaptation
Artificial Intelligence
Humans and AI learn together for better teamwork.