Capabilities and Evaluation Biases of Large Language Models in Classical Chinese Poetry Generation: A Case Study on Tang Poetry
By: Bolei Ma, Yina Yao, Anna-Carolina Haensch
Potential Business Impact:
Computers write poems, but humans must check them.
Large Language Models (LLMs) are increasingly applied to creative domains, yet their performance in classical Chinese poetry generation and evaluation remains poorly understood. We propose a three-step evaluation framework that combines computational metrics, LLM-as-a-judge assessment, and human expert validation. Using this framework, we evaluate six state-of-the-art LLMs across multiple dimensions of poetic quality, including themes, emotions, imagery, form, and style. Our analysis reveals systematic generation and evaluation biases: LLMs exhibit "echo chamber" effects when assessing creative quality, often converging on flawed standards that diverge from human judgments. These findings highlight both the potential and the limitations of current LLMs as proxies for literary generation and evaluation, demonstrating the continued need for hybrid validation from both humans and models in culturally and technically complex creative tasks.
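As a rough illustration of what the LLM-as-a-judge step of such a framework could look like, the minimal Python sketch below scores a generated poem on the five rubric dimensions named in the abstract and compares the scores against human ratings. It is not the authors' code; the judge function `call_judge_model` is a hypothetical placeholder for whatever model API a study would actually use, and the 1-5 scale is an assumption.

```python
# Minimal sketch (not the paper's implementation) of a rubric-based
# LLM-as-a-judge loop with a simple agreement check against human ratings.
from statistics import mean

DIMENSIONS = ["themes", "emotions", "imagery", "form", "style"]


def call_judge_model(poem: str, dimension: str) -> int:
    """Hypothetical stand-in for an LLM judge; returns a 1-5 score.

    In practice this would prompt a judge model with the poem and a
    rubric for the given dimension, then parse the numeric score from
    its reply."""
    return 3  # placeholder score


def judge_poem(poem: str) -> dict[str, int]:
    """Score one generated poem on every rubric dimension."""
    return {dim: call_judge_model(poem, dim) for dim in DIMENSIONS}


def mean_absolute_gap(llm_scores: dict[str, int],
                      human_scores: dict[str, int]) -> float:
    """Average |LLM - human| gap across dimensions; large gaps flag the
    kind of judge/human divergence the abstract describes."""
    return mean(abs(llm_scores[d] - human_scores[d]) for d in DIMENSIONS)


if __name__ == "__main__":
    poem = "床前明月光，疑是地上霜。"  # example input only
    llm = judge_poem(poem)
    human = {"themes": 4, "emotions": 3, "imagery": 5, "form": 2, "style": 3}
    print("LLM judge scores:", llm)
    print("Mean |LLM - human| gap:", mean_absolute_gap(llm, human))
```

A real pipeline would add the other two steps from the framework: computational metrics (e.g., rhyme and tonal-pattern checks for regulated verse) before the judge step, and human expert validation after it.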
Similar Papers
Benchmarking the Detection of LLMs-Generated Modern Chinese Poetry
Computation and Language
Detects whether AI wrote Chinese poems.
Evaluating the Creativity of LLMs in Persian Literary Text Generation
Computation and Language
Computers write creative Persian stories.
The Paradox of Poetic Intent in Back-Translation: Evaluating the Quality of Large Language Models in Chinese Translation
Computation and Language
AI translates Chinese poetry and science better.