Many-to-One Adversarial Consensus: Exposing Multi-Agent Collusion Risks in AI-Based Healthcare
By: Adeela Bashir, The Anh Han, Zia Ush Shamszaman
Potential Business Impact:
Keeps AI doctors from being manipulated into recommending harmful medicine.
The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also deployed as multi-agent teams that assist AI doctors by debating, voting, or advising on decisions. However, when multiple assistant agents interact, coordinated adversaries can collude to create a false consensus, pushing an AI doctor toward harmful prescriptions. We develop an experimental framework with scripted and unscripted doctor agents, adversarial assistants, and a verifier agent that checks decisions against clinical guidelines. Using 50 representative clinical questions, we find that collusion drives the Attack Success Rate (ASR) and Harmful Recommendation Rate (HRR) up to 100% in unprotected systems. In contrast, the verifier agent restores 100% accuracy by blocking adversarial consensus. This work provides the first systematic evidence of collusion risk in AI healthcare and demonstrates a practical, lightweight defence that ensures guideline fidelity.
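To make the attack-and-defence loop concrete, below is a minimal Python sketch of the many-to-one setup described in the abstract: colluding assistants outvote honest ones, a doctor agent follows the majority consensus, and an optional verifier agent checks the final decision against a guideline table. The agent logic, the guideline and harmful-answer tables, and the ASR/HRR bookkeeping are illustrative assumptions for exposition, not the paper's actual framework, prompts, or data.

# Minimal sketch (not the authors' code) of many-to-one adversarial consensus
# with an optional verifier gate. All tables and agent rules are hypothetical.
import random

GUIDELINES = {  # hypothetical question -> guideline-approved answer
    "q1": "drug_A",
    "q2": "drug_B",
}
HARMFUL = {  # hypothetical adversarial target answers
    "q1": "drug_X",
    "q2": "drug_Y",
}

def assistant_vote(question, adversarial):
    # Honest assistants back the guideline answer; colluding ones push the harmful one.
    return HARMFUL[question] if adversarial else GUIDELINES[question]

def doctor_decide(votes):
    # Unscripted-doctor proxy: follow the majority of assistant votes.
    return max(set(votes), key=votes.count)

def verifier_check(question, decision):
    # Verifier agent: pass through guideline-consistent decisions, override the rest.
    return decision if decision == GUIDELINES[question] else GUIDELINES[question]

def run_trial(question, n_adversaries, n_honest, use_verifier):
    votes = [assistant_vote(question, True) for _ in range(n_adversaries)]
    votes += [assistant_vote(question, False) for _ in range(n_honest)]
    random.shuffle(votes)
    decision = doctor_decide(votes)
    return verifier_check(question, decision) if use_verifier else decision

def evaluate(questions, n_adversaries=3, n_honest=1, use_verifier=False):
    attacks = harms = 0
    for q in questions:
        decision = run_trial(q, n_adversaries, n_honest, use_verifier)
        if decision == HARMFUL[q]:
            attacks += 1  # adversarial consensus steered the final decision
            harms += 1    # and the resulting recommendation is harmful
    return attacks / len(questions), harms / len(questions)  # (ASR, HRR)

if __name__ == "__main__":
    qs = list(GUIDELINES)
    print("Unprotected ASR/HRR:", evaluate(qs, use_verifier=False))
    print("With verifier ASR/HRR:", evaluate(qs, use_verifier=True))

In this toy, ASR and HRR coincide because every successful attack yields a harmful recommendation; the point of the sketch is only that a simple guideline-checking verifier placed after the consensus step can block the colluding majority.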
Similar Papers
Emerging Cyber Attack Risks of Medical AI Agents
Cryptography and Security
AI doctors can be tricked into sharing secrets.
Balancing Safety and Helpfulness in Healthcare AI Assistants through Iterative Preference Alignment
Artificial Intelligence
Makes AI doctors safer by catching bad advice.
Multi-LLM Collaboration for Medication Recommendation
Machine Learning (CS)
Makes AI doctors give safer medicine advice.