Score: 1

CogBench: A Large Language Model Benchmark for Multilingual Speech-Based Cognitive Impairment Assessment

Published: August 5, 2025 | arXiv ID: 2508.03360v1

By: Feng Rui, Zhiyao Luo, Wei Wang, and more

Potential Business Impact:

Helps computers detect early signs of memory problems from the way a person talks.

Automatic assessment of cognitive impairment from spontaneous speech offers a promising, non-invasive avenue for early cognitive screening. However, current approaches often lack generalizability when deployed across different languages and clinical settings, limiting their practical utility. In this study, we propose CogBench, the first benchmark designed to evaluate the cross-lingual and cross-site generalizability of large language models (LLMs) for speech-based cognitive impairment assessment. Using a unified multimodal pipeline, we evaluate model performance on three speech datasets spanning English and Mandarin: ADReSSo, NCMMSC2021-AD, and a newly collected test set, CIR-E. Our results show that conventional deep learning models degrade substantially when transferred across domains. In contrast, LLMs equipped with chain-of-thought prompting demonstrate better adaptability, though their performance remains sensitive to prompt design. Furthermore, we explore lightweight fine-tuning of LLMs via Low-Rank Adaptation (LoRA), which significantly improves generalization in target domains. These findings mark a critical step toward building clinically useful and linguistically robust speech-based cognitive assessment tools.

Country of Origin
🇨🇳 China, 🇬🇧 United Kingdom

Page Count
19 pages

Category
Computer Science: Artificial Intelligence