Score: 2

Deep Associations, High Creativity: A Simple yet Effective Metric for Evaluating Large Language Models

Published: October 14, 2025 | arXiv ID: 2510.12110v1

By: Ziliang Qiu, Renfen Hu

Potential Business Impact:

Tests AI's imagination like a human.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The evaluation of LLMs' creativity represents a crucial research domain, though challenges such as data contamination and costly human assessments often impede progress. Drawing inspiration from human creativity assessment, we propose PACE, asking LLMs to generate Parallel Association Chains to Evaluate their creativity. PACE minimizes the risk of data contamination and offers a straightforward, highly efficient evaluation, as evidenced by its strong correlation with Chatbot Arena Creative Writing rankings (Spearman's $\rho = 0.739$, $p < 0.001$) across various proprietary and open-source models. A comparative analysis of associative creativity between LLMs and humans reveals that while high-performing LLMs achieve scores comparable to average human performance, professional humans consistently outperform LLMs. Furthermore, linguistic analysis reveals that both humans and LLMs exhibit a trend of decreasing concreteness in their associations, while humans demonstrate a greater diversity of associative patterns.

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡³ United States, China

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Computation and Language