Bringing Everyone to the Table: An Experimental Study of LLM-Facilitated Group Decision Making
By: Mohammed Alsobay, David M. Rothschild, Jake M. Hofman, and more
Potential Business Impact:
AI helps groups share ideas better.
Group decision-making often suffers from uneven information sharing, which hinders decision quality. While large language models (LLMs) have been widely studied as aids for individual users, their potential to support groups, for instance as discussion facilitators, is relatively underexplored. We present a pre-registered randomized experiment with 1,475 participants assigned to 281 five-person groups completing a hidden profile task (selecting the optimal city for a hypothetical sporting event) under one of four facilitation conditions: no facilitation, a one-time message prompting information sharing, a human facilitator, or an LLM (GPT-4o) facilitator. We find that LLM facilitation increases the amount of information shared within a discussion by raising the minimum level of engagement with the task among group members, and that these gains come at limited cost to participants' attitudes toward the task, their group, or their facilitator. Facilitation, whether by a human or an AI, has no significant effect on the final decision outcome, suggesting that even substantial but partial increases in information sharing are insufficient to overcome the hidden profile effect studied here. To support further research into how LLM-based interfaces can support the future of collaborative decision making, we release our experimental platform, the Group-AI Interaction Laboratory (GRAIL), as an open-source tool.
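To make the design concrete, here is a minimal, hypothetical sketch of the randomization described in the abstract: participants shuffled into five-person groups, each group assigned to one of the four facilitation conditions. This is not the authors' GRAIL code; the function and condition names are illustrative assumptions, and the actual study may have used a balanced rather than uniform assignment.

```python
import random

# Hypothetical sketch of the experiment's randomization (not GRAIL code).
# The four facilitation conditions come from the abstract.
CONDITIONS = [
    "no_facilitation",
    "one_time_prompt",
    "human_facilitator",
    "llm_facilitator",  # GPT-4o in the study
]
GROUP_SIZE = 5

def assign_groups(participant_ids, seed=0):
    """Shuffle participants into five-person groups and assign each
    group a facilitation condition uniformly at random."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Keep only complete five-person groups.
    usable = len(ids) // GROUP_SIZE * GROUP_SIZE
    groups = [ids[i:i + GROUP_SIZE] for i in range(0, usable, GROUP_SIZE)]
    return [(rng.choice(CONDITIONS), members) for members in groups]

if __name__ == "__main__":
    for condition, members in assign_groups(range(25)):
        print(condition, members)
```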
Similar Papers
From Divergence to Consensus: Evaluating the Role of Large Language Models in Facilitating Agreement through Adaptive Strategies
Human-Computer Interaction
AI helps groups agree faster on decisions.
People Are Highly Cooperative with Large Language Models, Especially When Communication Is Possible or Following Human Interaction
Human-Computer Interaction
People cooperate with AI much as they do with other people.
To Mask or to Mirror: Human-AI Alignment in Collective Reasoning
Artificial Intelligence
AI groups can mirror or mask human group biases.