HKCanto-Eval: A Benchmark for Evaluating Cantonese Language Understanding and Cultural Comprehension in LLMs
By: Tsz Chung Cheng, Chung Shing Cheng, Chaak Ming Lau, and more
Potential Business Impact:
Tests how well computers understand Hong Kong's language and culture.
The ability of language models to comprehend and interact across diverse linguistic and cultural landscapes is crucial. The Cantonese language used in Hong Kong presents unique challenges for natural language processing due to its rich cultural nuances and the lack of dedicated evaluation datasets. The HKCanto-Eval benchmark addresses this gap by evaluating the performance of large language models (LLMs) on Cantonese language understanding tasks, extending to English and Written Chinese for cross-lingual evaluation. HKCanto-Eval integrates cultural and linguistic nuances intrinsic to Hong Kong, providing a robust framework for assessing language models in realistic scenarios. Additionally, the benchmark includes questions designed to probe the models' underlying linguistic metaknowledge. The findings indicate that while proprietary models generally outperform open-weight models, significant limitations remain in handling Cantonese-specific linguistic and cultural knowledge, highlighting the need for more targeted training data and evaluation methods. The code is available at https://github.com/hon9kon9ize/hkeval2025
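The kind of evaluation described above typically reduces to a multiple-choice scoring loop: prompt the model with a question and its candidate answers, then compare its picked label against the gold label. Below is a minimal sketch of such a loop; the sample item and the `query_model` stub are hypothetical illustrations, not actual HKCanto-Eval data or the authors' harness (see the linked repository for their implementation).

```python
# Minimal sketch of a multiple-choice evaluation loop in the spirit of
# HKCanto-Eval. The item below is an invented illustration, NOT an actual
# benchmark question; the real data and harness live in the linked repo.
from dataclasses import dataclass


@dataclass
class MCQItem:
    question: str       # prompt text (Cantonese, English, or Written Chinese)
    choices: list[str]  # candidate answers, each prefixed with a letter label
    answer: str         # gold choice label, e.g. "A"


# Hypothetical sample item for demonstration only.
ITEMS = [
    MCQItem(
        question='The Cantonese phrase "食咗飯未？" is most naturally translated as:',
        choices=[
            "A. Have you eaten yet?",
            "B. Where is the restaurant?",
            "C. What time is it?",
            "D. Do you like to cook?",
        ],
        answer="A",
    ),
]


def query_model(prompt: str) -> str:
    """Stub for an LLM call; swap in a real API or local model here.

    This placeholder always answers "A" so the sketch runs end to end.
    """
    return "A"


def evaluate(items: list[MCQItem]) -> float:
    """Exact-match accuracy over the predicted choice labels."""
    correct = 0
    for item in items:
        prompt = (
            item.question
            + "\n"
            + "\n".join(item.choices)
            + "\nAnswer with the letter of the correct choice."
        )
        # Take the first character of the reply as the predicted label.
        prediction = query_model(prompt).strip()[:1].upper()
        correct += prediction == item.answer
    return correct / len(items)


if __name__ == "__main__":
    print(f"accuracy: {evaluate(ITEMS):.2%}")
```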
Similar Papers
CantoNLU: A benchmark for Cantonese natural language understanding
Computation and Language
Helps computers understand the Cantonese language better.
Measuring Hong Kong Massive Multi-Task Language Understanding
Computation and Language
Helps AI understand Hong Kong's language and culture.
Developing and Utilizing a Large-Scale Cantonese Dataset for Multi-Tasking in Large Language Models
Computation and Language
Teaches computers to understand Cantonese better.