Black-Box Membership Inference Attack for LVLMs via Prior Knowledge-Calibrated Memory Probing
By: Jinhua Yin, Peiru Yang, Chen Yang, and more
Potential Business Impact:
Reveals whether an AI model was trained on your private images.
Large vision-language models (LVLMs) derive their capabilities from extensive training on vast corpora of visual and textual data. Empowered by large-scale parameters, these models often exhibit strong memorization of their training data, rendering them susceptible to membership inference attacks (MIAs). Existing MIA methods for LVLMs typically operate under white- or gray-box assumptions, extracting likelihood-based features for suspected data samples from the target LVLM. However, mainstream LVLMs generally expose only their generated outputs and conceal internal computational features during inference, which limits the applicability of these methods. In this work, we propose the first black-box MIA framework for LVLMs, built on a prior knowledge-calibrated memory probing mechanism. The core idea is to assess the model's memorization of the private semantic information embedded in a suspected image, information that is unlikely to be inferred from general world knowledge alone. We conducted extensive experiments across four LVLMs and three datasets. Empirical results demonstrate that our method effectively identifies training data of LVLMs in a purely black-box setting and even achieves performance comparable to gray-box and white-box methods. Further analysis confirms the robustness of our method against potential adversarial manipulations and the effectiveness of our design choices. Our code and data are available at https://github.com/spmede/KCMP.
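The calibration idea in the abstract can be illustrated with a small sketch. The hypothetical Python snippet below probes a black-box target with questions about private details of a suspected image (without providing the image) and subtracts a prior-knowledge baseline so that only recall unexplainable by general world knowledge contributes to the membership score. The wrappers `query_target_lvlm`, `query_reference_model`, the probe construction, and the `match` scorer are assumed placeholders, not the authors' actual implementation.

```python
# Minimal sketch of prior knowledge-calibrated memory probing (illustrative only).
# `query_target_lvlm` and `query_reference_model` are hypothetical black-box wrappers;
# probe generation and answer scoring are simplified stand-ins for the paper's design.
from typing import Callable, List


def membership_score(
    probe_questions: List[str],          # questions about private details of the suspected image
    ground_truth_answers: List[str],     # answers extracted from the image itself
    query_target_lvlm: Callable[[str], str],      # black-box target: text-only query, no image given
    query_reference_model: Callable[[str], str],  # prior-knowledge baseline (e.g., a public LLM)
    match: Callable[[str, str], float],  # similarity between a generated answer and the ground truth
) -> float:
    """Return a calibrated memorization score; higher suggests the image was in training data."""
    target_hits, prior_hits = 0.0, 0.0
    for question, answer in zip(probe_questions, ground_truth_answers):
        # How well the target recalls the private detail without seeing the image.
        target_hits += match(query_target_lvlm(question), answer)
        # How guessable the detail is from general world knowledge alone.
        prior_hits += match(query_reference_model(question), answer)
    n = max(len(probe_questions), 1)
    # Calibration: subtract the prior-knowledge baseline so only "surprising" recall counts.
    return (target_hits - prior_hits) / n


# Usage sketch: declare membership if the calibrated score exceeds a threshold tau.
# is_member = membership_score(qs, golds, target_fn, ref_fn, match_fn) > tau
```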
Similar Papers
OpenLVLM-MIA: A Controlled Benchmark Revealing the Limits of Membership Inference Attacks on Large Vision-Language Models
CV and Pattern Recognition
Tests whether an AI model remembers private training images.
Image Corruption-Inspired Membership Inference Attacks against Large Vision-Language Models
CV and Pattern Recognition
Detects whether your images were used to train an AI model.
Exposing and Defending Membership Leakage in Vulnerability Prediction Models
Cryptography and Security
Defends vulnerability-prediction models against leaking their training data.