LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation

Published: July 9, 2025 | arXiv ID: 2507.07274v1

By: Ananya Raval, Aravind Narayanan, Vahid Reza Khazaie, and more

Potential Business Impact:

Tests how fairly multimodal AI models answer questions across many languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Multimodal Models (LMMs) are typically trained on vast corpora of image-text data but are often limited in linguistic coverage, leading to biased and unfair outputs across languages. While prior work has explored multimodal evaluation, less emphasis has been placed on assessing multilingual capabilities. In this work, we introduce LinguaMark, a benchmark designed to evaluate state-of-the-art LMMs on a multilingual Visual Question Answering (VQA) task. Our dataset comprises 6,875 image-text pairs spanning 11 languages and five social attributes. We evaluate models using three key metrics: Bias, Answer Relevancy, and Faithfulness. Our findings reveal that closed-source models generally achieve the highest overall performance. Both closed-source (GPT-4o and Gemini 2.5) and open-source models (Gemma 3, Qwen2.5) perform competitively across social attributes, and Qwen2.5 demonstrates strong generalization across multiple languages. We release our benchmark and evaluation code to encourage reproducibility and further research.
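For intuition, here is a minimal Python sketch of how a LinguaMark-style evaluation loop could aggregate the three metrics per language. None of these names (VQAExample, score, evaluate) come from the authors' released code; they are hypothetical placeholders under the assumption that each example carries an image, a question in one of the 11 languages, and a social-attribute tag.

```python
"""Hypothetical sketch of a per-language VQA metric aggregation loop.

Not the LinguaMark API: all names and signatures here are illustrative
assumptions, not the authors' released evaluation code.
"""
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class VQAExample:
    image_path: str        # image half of the image-text pair
    question: str          # question text in one of the 11 languages
    language: str          # e.g. "en", "hi", "zh"
    social_attribute: str  # one of the five social attributes


METRICS = ("bias", "answer_relevancy", "faithfulness")


def score(metric: str, answer: str, example: VQAExample) -> float:
    """Placeholder scorer; benchmarks like this typically use an
    LLM judge or reference-based scoring here instead."""
    return 1.0 if answer else 0.0


def evaluate(model_answer, dataset):
    """model_answer: callable (image_path, question) -> str.

    Returns {language: {metric: mean score}} over the dataset.
    """
    sums = defaultdict(lambda: dict.fromkeys(METRICS, 0.0))
    counts = defaultdict(int)
    for ex in dataset:
        ans = model_answer(ex.image_path, ex.question)
        for m in METRICS:
            sums[ex.language][m] += score(m, ans, ex)
        counts[ex.language] += 1
    return {lang: {m: total / counts[lang] for m, total in per_metric.items()}
            for lang, per_metric in sums.items()}
```

Averaging within each language (and, analogously, within each social attribute) is what would let a study like this compare, say, Qwen2.5's cross-language generalization against closed-source models.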

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition