Collaborative Causal Sensemaking: Closing the Complementarity Gap in Human-AI Decision Support
By: Raunak Jain, Mudita Khurana
Potential Business Impact:
Helps AI work with people, not just for them.
LLM-based agents are increasingly deployed for expert decision support, yet human-AI teams in high-stakes settings still do not reliably outperform the best individual member. We argue this complementarity gap reflects a fundamental mismatch: current agents are trained as answer engines, not as partners in the collaborative sensemaking through which experts actually make decisions. Sensemaking (the ability to co-construct causal explanations, surface uncertainties, and adapt goals) is the key capability that current training pipelines neither explicitly develop nor evaluate. We propose Collaborative Causal Sensemaking (CCS) as a research agenda to develop this capability from the ground up, spanning new training environments that reward collaborative thinking, representations for shared human-AI mental models, and evaluation centred on trust and complementarity. These directions can advance multi-agent systems (MAS) research toward agents that think with their human partners rather than for them.
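The complementarity gap the abstract describes (a human-AI team that fails to beat its best individual member) can be made concrete with a simple measurement. The sketch below is an illustrative accuracy-based formulation, not a metric defined in the paper; the function names and toy data are assumptions for exposition.

```python
from typing import Sequence


def accuracy(preds: Sequence[int], labels: Sequence[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)


def complementarity_gap(human_preds: Sequence[int],
                        ai_preds: Sequence[int],
                        team_preds: Sequence[int],
                        labels: Sequence[int]) -> float:
    """Team performance minus the best individual performance.

    Positive values indicate complementarity (the joint human-AI decisions
    beat both the human alone and the AI alone); zero or negative values
    correspond to the gap described in the abstract.
    """
    best_individual = max(accuracy(human_preds, labels),
                          accuracy(ai_preds, labels))
    return accuracy(team_preds, labels) - best_individual


# Toy example: the team matches the individuals but does not exceed them,
# so the gap is zero and no complementarity is achieved.
labels = [1, 0, 1, 1, 0]
human  = [1, 0, 0, 1, 0]   # 4/5 correct
ai     = [1, 1, 1, 1, 0]   # 4/5 correct
team   = [1, 0, 1, 1, 1]   # 4/5 correct
print(complementarity_gap(human, ai, team, labels))  # 0.0
```

Evaluation centred on complementarity, as the agenda proposes, would optimise and report this kind of team-level difference rather than the agent's standalone accuracy.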
Similar Papers
The Collaboration Gap
Artificial Intelligence
AI teams struggle to work together and need better training.
Sensemaking in Novel Environments: How Human Cognition Can Inform Artificial Agents
Artificial Intelligence
Lets computers understand new things like people do.