In-context Learning of Vision Language Models for Detection of Physical and Digital Attacks against Face Recognition Systems
By: Lazaro Janier Gonzalez-Soler, Maciej Salwowski, Christoph Busch
Potential Business Impact:
Helps face scanners spot fake faces better.
Recent advances in biometric systems have significantly improved the detection and prevention of fraudulent activities. However, as detection methods improve, attack techniques become increasingly sophisticated. Attacks on face recognition systems can be broadly divided into physical and digital approaches. Traditionally, deep learning models have been the primary defence against such attacks. While these models perform exceptionally well in the scenarios for which they have been trained, they often struggle to adapt to different types of attacks or varying environmental conditions. These subsystems require substantial amounts of training data to achieve reliable performance, yet biometric data collection faces significant challenges, including privacy concerns and the logistical difficulties of capturing diverse attack scenarios under controlled conditions. This work investigates the application of Vision Language Models (VLMs) and proposes an in-context learning framework for detecting physical presentation attacks and digital morphing attacks in biometric systems. Focusing on open-source models, it establishes the first systematic framework for the quantitative evaluation of VLMs in security-critical scenarios through in-context learning techniques. The experimental evaluation conducted on freely available databases demonstrates that the proposed subsystem achieves competitive performance for physical and digital attack detection, outperforming some traditional CNNs without resource-intensive training. The experimental results validate the proposed framework as a promising tool for improving generalisation in attack detection.
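The core idea of the in-context learning framework described above can be sketched as few-shot prompting of a multimodal model: a handful of labelled example images are placed in the prompt before the probe image, so the VLM classifies the probe without any weight updates. The sketch below is a minimal illustration of this pattern, assuming an OpenAI-style multimodal chat format; the file names, prompt wording, and labels are hypothetical and not the paper's exact configuration.

```python
# Hedged sketch: building a few-shot (in-context) prompt for attack
# detection with a vision-language model. File paths and wording are
# illustrative assumptions; the paper's actual prompts may differ.

def build_icl_messages(examples, query_image):
    """Build a few-shot prompt: labelled example images, then the probe."""
    messages = [{
        "role": "system",
        "content": "You are a face-recognition security auditor. "
                   "Answer 'bona fide' or 'attack' for each face image.",
    }]
    # Each in-context example is a (user image + question, assistant label) pair.
    for path, label in examples:
        messages.append({"role": "user", "content": [
            {"type": "image", "image": path},
            {"type": "text", "text": "Is this face bona fide or an attack?"},
        ]})
        messages.append({"role": "assistant", "content": label})
    # The unlabelled probe image comes last; the model's reply is the decision.
    messages.append({"role": "user", "content": [
        {"type": "image", "image": query_image},
        {"type": "text", "text": "Is this face bona fide or an attack?"},
    ]})
    return messages

# Hypothetical demonstration set covering bona fide, physical (print),
# and digital (morph) attack examples.
demo = [("bonafide_01.png", "bona fide"),
        ("print_attack_01.png", "attack"),
        ("morph_attack_01.png", "attack")]
prompt = build_icl_messages(demo, "probe.png")
```

Because the demonstrations live entirely in the prompt, swapping in examples of a new attack type (e.g. replay or face-swap) adapts the detector without retraining, which is the generalisation benefit the abstract highlights.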
Similar Papers
Identity-Aware Vision-Language Model for Explainable Face Forgery Detection
Multimedia
Finds fake pictures by checking if they make sense.
Visual Language Models as Zero-Shot Deepfake Detectors
CV and Pattern Recognition
Finds fake videos better than old ways.
VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models
Image and Video Processing
Hides private parts of pictures from smart AI.