Multi-Stakeholder Alignment in LLM-Powered Collaborative AI Systems: A Multi-Agent Framework for Intelligent Tutoring
By: Alexandre P. Uchoa, Carlo E. T. Oliveira, Claudia L. R. Motta, and others
Potential Business Impact:
Helps AI tutors balance what students, parents, teachers, and schools want.
The integration of Large Language Models into Intelligent Tutoring Systems presents significant challenges in aligning with diverse and often conflicting values from students, parents, teachers, and institutions. Existing architectures lack formal mechanisms for negotiating these multi-stakeholder tensions, creating risks in accountability and bias. This paper introduces the Advisory Governance Layer (AGL), a non-intrusive, multi-agent framework designed to enable distributed stakeholder participation in AI governance. The AGL employs specialized agents representing stakeholder groups to evaluate pedagogical actions against their specific policies in a privacy-preserving manner, anticipating future advances in personal assistant technology that will enhance stakeholder value expression. Through a novel policy taxonomy and conflict-resolution protocols, the framework provides structured, auditable governance advice to the ITS without altering its core pedagogical decision-making. This work contributes a reference architecture and technical specifications for aligning educational AI with multi-stakeholder values, bridging the gap between high-level ethical principles and practical implementation.
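To make the advisory pattern in the abstract concrete, here is a minimal sketch of stakeholder agents evaluating a proposed pedagogical action against their own policies, with a governance layer aggregating the verdicts into non-binding, auditable advice. The class names, verdict levels, and example policies are illustrative assumptions, not the paper's actual API or specification.

```python
# Illustrative sketch only: stakeholder agents advise on a proposed action,
# but the ITS retains final pedagogical authority (advice, not override).
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    OBJECT = "object"   # explicit policy conflict


@dataclass
class Evaluation:
    stakeholder: str    # e.g. "student", "parent", "teacher", "institution"
    verdict: Verdict
    rationale: str      # retained for the audit trail


@dataclass
class StakeholderAgent:
    stakeholder: str
    policies: list      # predicate functions over the proposed action

    def evaluate(self, action: dict) -> Evaluation:
        # Each agent sees only the action description, not other stakeholders'
        # data -- one way to read the abstract's privacy-preserving constraint.
        violations = [p.__name__ for p in self.policies if not p(action)]
        if not violations:
            return Evaluation(self.stakeholder, Verdict.APPROVE, "no policy violated")
        return Evaluation(self.stakeholder, Verdict.OBJECT,
                          "violated: " + ", ".join(violations))


@dataclass
class AdvisoryGovernanceLayer:
    agents: list

    def advise(self, action: dict) -> dict:
        # Collect every evaluation and return advice plus an audit record.
        evaluations = [a.evaluate(action) for a in self.agents]
        objections = [e for e in evaluations if e.verdict is Verdict.OBJECT]
        return {
            "action": action,
            "advice": "review" if objections else "proceed",
            "audit_trail": [(e.stakeholder, e.verdict.value, e.rationale)
                            for e in evaluations],
        }


if __name__ == "__main__":
    # Hypothetical policies: a parent agent limiting late-night sessions and a
    # teacher agent requiring curriculum alignment.
    def no_late_sessions(action): return action.get("hour", 12) < 21
    def on_curriculum(action): return action.get("topic") in {"fractions", "decimals"}

    agl = AdvisoryGovernanceLayer(agents=[
        StakeholderAgent("parent", [no_late_sessions]),
        StakeholderAgent("teacher", [on_curriculum]),
    ])
    print(agl.advise({"topic": "fractions", "hour": 22}))
```

In this sketch, conflict resolution is reduced to a simple "any objection triggers review" rule; the paper's policy taxonomy and conflict-resolution protocols would replace that aggregation step.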
Similar Papers
Multi-level Value Alignment in Agentic AI Systems: Survey and Perspectives
Artificial Intelligence
Makes AI agents follow human rules and values.
Multi-Agent Collaboration Mechanisms: A Survey of LLMs
Artificial Intelligence
Lets AI groups work together to solve hard problems.
Enabling Multi-Agent Systems as Learning Designers: Applying Learning Sciences to AI Instructional Design
Computers and Society
Helps teachers make better school lessons.