Collaborative Causal Sensemaking: Closing the Complementarity Gap in Human-AI Decision Support
By: Raunak Jain, Mudita Khurana
Potential Business Impact:
Teaches AI to think with people, not just answer.
LLM-based agents are increasingly deployed for expert decision support, yet human-AI teams in high-stakes settings do not yet reliably outperform the best individual. We argue this complementarity gap reflects a fundamental mismatch: current agents are trained as answer engines, not as partners in the collaborative sensemaking through which experts actually make decisions. Sensemaking (the ability to co-construct causal explanations, surface uncertainties, and adapt goals) is the key capability that current training pipelines do not explicitly develop or evaluate. We propose Collaborative Causal Sensemaking (CCS) as a research agenda to develop this capability from the ground up, spanning new training environments that reward collaborative thinking, representations for shared human-AI mental models, and evaluation centred on trust and complementarity. Taken together, these directions shift multi-agent systems (MAS) research from building oracle-like answer engines to cultivating AI teammates that co-reason with their human partners over the causal structure of shared decisions, advancing the design of effective human-AI teams.
Similar Papers
The Collaboration Gap
Artificial Intelligence
AI teams struggle to work together, need better training.