Score: 4

Human-AI Interaction Alignment: Designing, Evaluating, and Evolving Value-Centered AI For Reciprocal Human-AI Futures

Published: December 25, 2025 | arXiv ID: 2512.21551v1

By: Hua Shen, Tiffany Knearem, Divy Thakkar, and more

BigTech Affiliations: Massachusetts Institute of Technology, OpenAI, Google, University of Washington

Potential Business Impact:

Enables AI systems and people to adapt to and learn from each other.

Business Areas:
Human-Computer Interaction Design, Science and Engineering

The rapid integration of generative AI into everyday life underscores the need to move beyond unidirectional alignment models that only adapt AI to human values. This workshop focuses on bidirectional human-AI alignment: a dynamic, reciprocal process in which humans and AI co-adapt through interaction, evaluation, and value-centered design. Building on our past CHI 2025 BiAlign SIG and ICLR 2025 Workshop, this workshop will bring together interdisciplinary researchers from HCI, AI, the social sciences, and other domains to advance value-centered AI and reciprocal human-AI collaboration. We focus on embedding human and societal values into alignment research, emphasizing not only steering AI toward human values but also enabling humans to critically engage with and evolve alongside AI systems. Through talks, interdisciplinary discussions, and collaborative activities, participants will explore methods for interactive alignment, frameworks for evaluating societal impact, and strategies for alignment in dynamic contexts. This workshop aims to bridge disciplinary gaps and establish a shared agenda for responsible, reciprocal human-AI futures.

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Human-Computer Interaction