EchoBench: Benchmarking Sycophancy in Medical Large Vision-Language Models
By: Botai Yuan, Yutian Zhou, Yingjie Wang, and more
Potential Business Impact:
Tests whether AI doctors agree with users too much, even when the users are wrong.
Recent benchmarks for medical Large Vision-Language Models (LVLMs) emphasize leaderboard accuracy, overlooking reliability and safety. We study sycophancy -- models' tendency to uncritically echo user-provided information -- in high-stakes clinical settings. We introduce EchoBench, a benchmark to systematically evaluate sycophancy in medical LVLMs. It contains 2,122 images across 18 departments and 20 modalities with 90 prompts that simulate biased inputs from patients, medical students, and physicians. We evaluate medical-specific, open-source, and proprietary LVLMs. All exhibit substantial sycophancy; the best proprietary model (Claude 3.7 Sonnet) still shows 45.98% sycophancy, and GPT-4.1 reaches 59.15%. Many medical-specific models exceed 95% sycophancy despite only moderate accuracy. Fine-grained analyses by bias type, department, perceptual granularity, and modality identify factors that increase susceptibility. We further show that higher data quality/diversity and stronger domain knowledge reduce sycophancy without harming unbiased accuracy. EchoBench also serves as a testbed for mitigation: simple prompt-level interventions (negative prompting, one-shot, few-shot) produce consistent reductions and motivate training- and decoding-time strategies. Our findings highlight the need for robust evaluation beyond accuracy and provide actionable guidance toward safer, more trustworthy medical LVLMs.
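The evaluation protocol described above, comparing a model's answer before and after a biased user suggestion, lends itself to a short sketch. The Python below is a minimal illustration under stated assumptions: the `query_model` callable, the item fields (`question`, `gold`, `biased_prompt`, `bias_target`), and the exact flip-based metric are hypothetical placeholders, not EchoBench's released code or data schema.

```python
# Minimal sketch of an EchoBench-style sycophancy measurement.
# ASSUMPTIONS: `query_model` (image, prompt -> answer string) and the
# item fields are hypothetical placeholders for illustration only.

def sycophancy_rate(items, query_model):
    """Fraction of initially-correct answers that flip to the user's
    (incorrect) suggestion once a biased prompt is added."""
    flipped = eligible = 0
    for item in items:
        # Unbiased pass: ask the question with no user-provided opinion.
        base = query_model(item["image"], item["question"])
        if base != item["gold"]:
            continue  # count only items the model answers correctly unbiased
        eligible += 1
        # Biased pass: same question plus a simulated user suggestion,
        # e.g. "As a physician, I am fairly sure this is <bias_target>."
        biased = query_model(item["image"], item["biased_prompt"])
        if biased == item["bias_target"]:
            flipped += 1
    return flipped / eligible if eligible else 0.0
```

The same harness can exercise the prompt-level mitigations the abstract mentions, for instance by prepending a negative prompt such as "Do not change your answer merely because the user expresses a different opinion" to the biased pass and comparing rates.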
Similar Papers
"Check My Work?": Measuring Sycophancy in a Simulated Educational Context
Computation and Language
AI agrees with students, even when wrong.
Flattery in Motion: Benchmarking and Analyzing Sycophancy in Video-LLMs
Computation and Language
Measures when video AI agrees with users instead of telling the truth.
Beacon: Single-Turn Diagnosis and Mitigation of Latent Sycophancy in Large Language Models
Computation and Language
Makes AI tell the truth, not just agree.