Deep Associations, High Creativity: A Simple yet Effective Metric for Evaluating Large Language Models
By: Ziliang Qiu, Renfen Hu
Potential Business Impact:
Tests an AI's imagination the way human creativity is tested.
The evaluation of LLMs' creativity represents a crucial research domain, though challenges such as data contamination and costly human assessment often impede progress. Drawing inspiration from human creativity assessment, we propose PACE, which asks LLMs to generate Parallel Association Chains to Evaluate their creativity. PACE minimizes the risk of data contamination and offers a straightforward, highly efficient evaluation, as evidenced by its strong correlation with Chatbot Arena Creative Writing rankings (Spearman's $\rho = 0.739$, $p < 0.001$) across various proprietary and open-source models. A comparative analysis of associative creativity between LLMs and humans reveals that while high-performing LLMs achieve scores comparable to average human performance, human professionals consistently outperform LLMs. Furthermore, linguistic analysis shows that both humans and LLMs exhibit decreasing concreteness in their associations, with humans demonstrating a greater diversity of associative patterns.
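The validation the abstract reports is a rank correlation between per-model PACE scores and Chatbot Arena Creative Writing rankings. Below is a minimal sketch of that check; the model names and scores are illustrative placeholders, not the paper's data, and the scoring of association chains itself is not shown here since the abstract does not specify it.

```python
# Minimal sketch of the rank-correlation check described in the abstract.
# All model names and numbers are hypothetical placeholders, not values
# reported in the paper.
from scipy.stats import spearmanr

# Hypothetical PACE creativity scores, one per model (higher = more creative).
pace_scores = {
    "model-a": 0.81,
    "model-b": 0.74,
    "model-c": 0.62,
    "model-d": 0.55,
}

# Hypothetical Chatbot Arena Creative Writing ratings for the same models.
arena_ratings = {
    "model-a": 1290,
    "model-b": 1275,
    "model-c": 1220,
    "model-d": 1190,
}

# Align the two score lists by model name, then compute Spearman's rho.
models = sorted(pace_scores)
rho, p_value = spearmanr(
    [pace_scores[m] for m in models],
    [arena_ratings[m] for m in models],
)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3g}")
```

Spearman's $\rho$ is a natural choice here because Arena leaderboards are ordinal: it compares rankings rather than raw score magnitudes, so PACE only needs to order models the same way human preferences do.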
Similar Papers
Has the Creativity of Large-Language Models peaked? An analysis of inter- and intra-LLM variability
Computation and Language
Computers aren't getting more creative, even the best ones.
Style Over Story: A Process-Oriented Study of Authorial Creativity in Large Language Models
Computation and Language
AI writing tools prefer style over story.
Rethinking Creativity Evaluation: A Critical Analysis of Existing Creativity Evaluations
Computation and Language
Helps computers judge creative ideas more like humans.