SNS-Bench-VL: Benchmarking Multimodal Large Language Models in Social Networking Services
By: Hongcheng Guo, Zheyong Xie, Shaosheng Cao, and more
Potential Business Impact:
Tests AI on social media pictures and words.
With the increasing integration of visual and textual content in Social Networking Services (SNS), evaluating the multimodal capabilities of Large Language Models (LLMs) is crucial for enhancing user experience, content understanding, and platform intelligence. Existing benchmarks primarily focus on text-centric tasks and lack coverage of the multimodal contexts prevalent in modern SNS ecosystems. In this paper, we introduce SNS-Bench-VL, a comprehensive multimodal benchmark designed to assess the performance of Vision-Language LLMs in real-world social media scenarios. SNS-Bench-VL incorporates images and text across 8 multimodal tasks, including note comprehension, user engagement analysis, information retrieval, and personalized recommendation. It comprises 4,001 carefully curated multimodal question-answer pairs spanning single-choice, multiple-choice, and open-ended question formats. We evaluate over 25 state-of-the-art multimodal LLMs and analyze their performance across tasks. Our findings highlight persistent challenges in multimodal social-context comprehension. We hope SNS-Bench-VL will inspire future research towards robust, context-aware, and human-aligned multimodal intelligence for next-generation social networking services.
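To make the benchmark's structure concrete, here is a minimal sketch of how one SNS-Bench-VL item and its scoring might look. The field names, JSON-style layout, and scoring rules are assumptions for illustration only; the abstract does not specify the released data format or the official evaluation metrics.

```python
# Hypothetical sketch of an SNS-Bench-VL item and a simple scorer.
# All field names and scoring rules below are assumptions, not the
# paper's actual data schema or evaluation protocol.
from dataclasses import dataclass, field


@dataclass
class BenchItem:
    task: str               # one of the 8 multimodal tasks, e.g. "note_comprehension"
    question_type: str      # "single_choice" | "multiple_choice" | "open_ended"
    image_path: str         # path to the associated social-media image
    question: str           # question text shown alongside the image
    choices: list[str] = field(default_factory=list)  # empty for open-ended items
    answer: str = ""        # gold option letter(s) or reference answer


def score(item: BenchItem, prediction: str) -> float:
    """Exact-match scoring for choice questions; open-ended answers
    would realistically need a similarity or judge-model metric."""
    if item.question_type == "single_choice":
        return float(prediction.strip() == item.answer)
    if item.question_type == "multiple_choice":
        # Compare unordered sets of option letters, e.g. "AC" == "CA".
        return float(set(prediction.strip()) == set(item.answer))
    # Placeholder for open-ended items: naive normalized exact match.
    return float(prediction.strip().lower() == item.answer.strip().lower())


if __name__ == "__main__":
    item = BenchItem(
        task="user_engagement_analysis",
        question_type="single_choice",
        image_path="notes/0001.jpg",
        question="Which caption is most likely to drive engagement for this image?",
        choices=["A. ...", "B. ...", "C. ...", "D. ..."],
        answer="B",
    )
    print(score(item, "B"))  # 1.0
```

In practice, a harness along these lines would feed `image_path` and `question` to each of the 25+ evaluated models and aggregate `score` per task; the choice of open-ended metric (string similarity vs. an LLM judge) is left open here.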
Similar Papers
HSSBench: Benchmarking Humanities and Social Sciences Ability for Multimodal Large Language Models
Computation and Language
Helps computers understand history and art better.
Exploring Vision Language Models for Multimodal and Multilingual Stance Detection
Computation and Language
Helps computers understand opinions in pictures and words.
VS-Bench: Evaluating VLMs for Strategic Reasoning and Decision-Making in Multi-Agent Environments
Artificial Intelligence
Helps AI agents play games with each other.