Multi-TW: Benchmarking Multimodal Models on Traditional Chinese Question Answering in Taiwan
By: Jui-Ming Yao, Bing-Cheng Xie, Sheng-Wei Peng, and more
Potential Business Impact:
Helps computers understand Chinese pictures, sounds, and words.
Multimodal Large Language Models (MLLMs) process visual, acoustic, and textual inputs, addressing the limitations of single-modality LLMs. However, existing benchmarks often overlook tri-modal evaluation in Traditional Chinese and do not consider inference latency. To address this, we introduce Multi-TW, the first Traditional Chinese benchmark for evaluating the performance and latency of any-to-any multimodal models. Multi-TW includes 900 multiple-choice questions (image-and-text or audio-and-text pairs) sourced from official proficiency tests developed with the Steering Committee for the Test of Proficiency-Huayu (SC-TOP). We evaluated various any-to-any models and vision-language models (VLMs) with audio transcription. Our results show that closed-source models generally outperform open-source ones across modalities, although open-source models can perform well in audio tasks. End-to-end any-to-any pipelines offer clear latency advantages compared to VLMs using separate audio transcription. Multi-TW presents a comprehensive view of model capabilities and highlights the need for Traditional Chinese fine-tuning and efficient multimodal architectures.
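The paper's own evaluation harness is not shown here; the sketch below only illustrates the comparison the abstract describes, timing an end-to-end any-to-any call against a transcribe-then-VLM cascade on multiple-choice items. The callables (any_to_any_model, asr_model, vlm_model) and the item fields are placeholder assumptions, not the authors' actual interfaces.

```python
import time

def evaluate(items, any_to_any_model, asr_model, vlm_model):
    """Compare accuracy and latency of two pipeline styles on
    Multi-TW-style multiple-choice items (image/audio + text question).

    All three model arguments are hypothetical callables; only their
    call shape is assumed here.
    """
    results = {"any_to_any": [], "cascaded": []}
    for item in items:  # item: dict with image, audio, question, choices, answer
        # End-to-end pipeline: one model consumes image + audio + text directly.
        start = time.perf_counter()
        pred = any_to_any_model(item["image"], item["audio"],
                                item["question"], item["choices"])
        results["any_to_any"].append((pred == item["answer"],
                                      time.perf_counter() - start))

        # Cascaded pipeline: transcribe the audio first, then feed the
        # transcript plus question to a vision-language model.
        start = time.perf_counter()
        transcript = asr_model(item["audio"])
        pred = vlm_model(item["image"],
                         transcript + "\n" + item["question"], item["choices"])
        results["cascaded"].append((pred == item["answer"],
                                    time.perf_counter() - start))

    for name, rows in results.items():
        accuracy = sum(ok for ok, _ in rows) / len(rows)
        mean_latency = sum(t for _, t in rows) / len(rows)
        print(f"{name}: accuracy={accuracy:.3f}, mean latency={mean_latency:.2f}s")
```

The latency split matters because the cascade pays for two sequential model calls (ASR, then VLM), whereas the end-to-end pipeline makes a single call; the abstract reports that the latter has a clear latency advantage in practice.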
Similar Papers
VisTW: Benchmarking Vision-Language Models for Traditional Chinese in Taiwan
Computation and Language
Tests how well computers understand Chinese pictures and words.
TCC-Bench: Benchmarking the Traditional Chinese Culture Understanding Capabilities of MLLMs
Multimedia
Helps AI understand Chinese culture in pictures.
Multimodal Evaluation of Russian-language Architectures
Computation and Language
Tests AI that understands Russian pictures, sounds, and words.