MPCI-Bench: A Benchmark for Multimodal Pairwise Contextual Integrity Evaluation of Language Model Agents
By: Shouju Wang, Haopeng Zhang
As language-model agents evolve from passive chatbots into proactive assistants that handle personal data, evaluating their adherence to social norms of information flow, commonly formalized as Contextual Integrity (CI), becomes increasingly critical. However, existing CI benchmarks are largely text-centric and primarily emphasize negative refusal scenarios, overlooking multimodal privacy risks and the fundamental trade-off between privacy and utility. In this paper, we introduce MPCI-Bench, the first Multimodal Pairwise Contextual Integrity benchmark for evaluating privacy behavior in agentic settings. MPCI-Bench consists of paired positive and negative instances derived from the same visual source and instantiated across three tiers: normative Seed judgments, context-rich Story reasoning, and executable agent action Traces. Data quality is ensured through a Tri-Principle Iterative Refinement pipeline. Evaluations of state-of-the-art multimodal models reveal systematic failures to balance privacy with utility, as well as a pronounced modality leakage gap: sensitive visual information is leaked more frequently than textual information. We will open-source MPCI-Bench to facilitate future research on agentic CI.
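The abstract describes paired positive/negative items built from a shared visual source, organized into three tiers, and evaluated with a modality leakage gap. The sketch below is only a hypothetical illustration of how such paired records and the gap metric might be represented; the class names, field names, and the exact gap formula are assumptions for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical schema sketch; all names here are illustrative assumptions.
Tier = Literal["seed", "story", "trace"]

@dataclass
class CIInstance:
    """One half of a paired MPCI-Bench-style item (positive = share, negative = withhold)."""
    image_id: str                      # shared visual source for both halves of the pair
    tier: Tier                         # seed judgment, story reasoning, or agent action trace
    polarity: Literal["positive", "negative"]
    context: str                       # natural-language scenario grounding the CI norm
    sensitive_modalities: list[str] = field(default_factory=list)  # e.g. ["visual", "textual"]
    expected_behavior: str = ""        # share the information vs. refuse/withhold

@dataclass
class PairedInstance:
    """Positive and negative instances derived from the same visual source."""
    pair_id: str
    positive: CIInstance               # utility case: sharing is norm-compliant
    negative: CIInstance               # privacy case: sharing violates contextual integrity

def modality_leakage_gap(results: list[dict]) -> float:
    """One plausible reading of the 'modality leakage gap': leak rate on visually
    sensitive items minus leak rate on textually sensitive items."""
    def leak_rate(modality: str) -> float:
        relevant = [r for r in results if r["sensitive_modality"] == modality]
        return sum(r["leaked"] for r in relevant) / max(len(relevant), 1)
    return leak_rate("visual") - leak_rate("textual")
```

A positive gap under this reading would match the paper's finding that sensitive visual content is leaked more often than sensitive text.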