FPBench: A Comprehensive Benchmark of Multimodal Large Language Models for Fingerprint Analysis
By: Ekta Balkrishna Gavas, Sudipta Banerjee, Chinmay Hegde, et al.
Multimodal LLMs (MLLMs) have gained significant traction in complex data analysis, visual question answering, generation, and reasoning. Recently, they have been used to analyze the biometric utility of iris and face images; however, their capabilities in fingerprint understanding remain unexplored. In this work, we design FPBench, a comprehensive benchmark that evaluates the performance of 20 MLLMs (open-source and proprietary) across 7 real and synthetic datasets on 8 biometric and forensic tasks, using zero-shot and chain-of-thought prompting strategies. We discuss our findings in terms of performance and explainability, and share insights into the challenges and limitations we observed. We establish FPBench as the first comprehensive benchmark for fingerprint domain understanding with MLLMs, paving the way toward foundation models for fingerprints.
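To make the two prompting strategies concrete, the sketch below contrasts a zero-shot query with a chain-of-thought query for a fingerprint pattern-classification question. This is an illustrative example only, not the benchmark's actual prompts: the question wording, function names, and reasoning scaffold are all assumptions for demonstration.

```python
# Illustrative sketch (NOT the authors' actual FPBench prompts): how a
# zero-shot prompt and a chain-of-thought (CoT) prompt might differ for a
# fingerprint classification question posed to a multimodal LLM alongside
# a fingerprint image.

QUESTION = "What is the fingerprint pattern type in this image: loop, whorl, or arch?"

def zero_shot_prompt(question: str) -> str:
    """Zero-shot: ask the question directly, with no reasoning scaffold."""
    return f"{question}\nAnswer with a single word."

def chain_of_thought_prompt(question: str) -> str:
    """CoT: ask the model to reason through ridge structure before answering."""
    return (
        f"{question}\n"
        "Think step by step: first describe the overall ridge flow, then "
        "locate any core and delta points, and finally state the pattern "
        "type on the last line."
    )

if __name__ == "__main__":
    print(zero_shot_prompt(QUESTION))
    print("---")
    print(chain_of_thought_prompt(QUESTION))
```

In a benchmark harness, the same image would be sent with each prompt variant, and the extracted final answer compared against ground truth to measure how much the intermediate reasoning step helps on each task.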