Score: 2

Multi-TW: Benchmarking Multimodal Models on Traditional Chinese Question Answering in Taiwan

Published: August 2, 2025 | arXiv ID: 2508.01274v1

By: Jui-Ming Yao, Bing-Cheng Xie, Sheng-Wei Peng, and more

Potential Business Impact:

Helps computers understand images, speech, and text in Traditional Chinese.

Multimodal Large Language Models (MLLMs) process visual, acoustic, and textual inputs, addressing the limitations of single-modality LLMs. However, existing benchmarks often overlook tri-modal evaluation in Traditional Chinese and do not consider inference latency. To address this, we introduce Multi-TW, the first Traditional Chinese benchmark for evaluating the performance and latency of any-to-any multimodal models. Multi-TW includes 900 multiple-choice questions (image-text and audio-text pairs) sourced from official proficiency tests developed with the Steering Committee for the Test of Proficiency-Huayu (SC-TOP). We evaluated various any-to-any models and vision-language models (VLMs) with audio transcription. Our results show that closed-source models generally outperform open-source ones across modalities, although open-source models can perform well on audio tasks. End-to-end any-to-any pipelines offer clear latency advantages over VLMs that rely on separate audio transcription. Multi-TW presents a comprehensive view of model capabilities and highlights the need for Traditional Chinese fine-tuning and efficient multimodal architectures.
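The core comparison in the abstract is between end-to-end any-to-any models and cascaded pipelines (ASR transcription followed by a VLM), scored on both multiple-choice accuracy and per-question latency. The sketch below illustrates how such an evaluation loop might look; the `Question` dataclass, the pipeline functions, and their simulated delays are illustrative placeholders, not the authors' actual harness or any real model API.

```python
import random
import time
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Question:
    """One hypothetical Multi-TW-style item: a prompt plus an image or audio clip."""
    prompt: str
    choices: List[str]               # option texts for labels A-D
    answer: str                      # gold label, e.g. "B"
    audio_path: Optional[str] = None
    image_path: Optional[str] = None


def evaluate(questions: List[Question],
             predict: Callable[[Question], str]) -> dict:
    """Score a pipeline on accuracy and mean per-question latency."""
    correct, latency = 0, 0.0
    for q in questions:
        t0 = time.perf_counter()
        pred = predict(q)                        # pipeline returns an option label
        latency += time.perf_counter() - t0
        correct += int(pred.strip().upper() == q.answer)
    n = max(len(questions), 1)
    return {"accuracy": correct / n, "mean_latency_s": latency / n}


# Placeholder pipelines: swap in real model calls for an actual run.
def any_to_any_pipeline(q: Question) -> str:
    """End-to-end model consumes audio/image plus text in a single call."""
    time.sleep(0.01)                             # simulated single-model latency
    return random.choice(["A", "B", "C", "D"])


def transcribe_then_vlm_pipeline(q: Question) -> str:
    """Cascade: ASR transcription first, then a VLM on transcript + image."""
    if q.audio_path:
        time.sleep(0.02)                         # simulated ASR step adds latency
    time.sleep(0.01)                             # simulated VLM call
    return random.choice(["A", "B", "C", "D"])


if __name__ == "__main__":
    demo = [Question("這段對話發生在哪裡？",  # "Where does this conversation take place?"
                     ["車站", "餐廳", "學校", "醫院"], "B",
                     audio_path="clip_001.wav")]
    print("any-to-any:", evaluate(demo, any_to_any_pipeline))
    print("ASR + VLM :", evaluate(demo, transcribe_then_vlm_pipeline))
```

Measured this way, the cascaded pipeline pays for its extra ASR step on every audio item, which is consistent with the paper's observation that end-to-end any-to-any pipelines have a latency advantage.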

Country of Origin
🇹🇼 Taiwan, 🇬🇧 United Kingdom

Page Count
8 pages

Category
Computer Science: Artificial Intelligence