A Comparison of Human and ChatGPT Classification Performance on Complex Social Media Data
By: Breanna E. Green, Ashley L. Shea, Pengfei Zhao, and more
Potential Business Impact:
AI struggles to understand tricky words.
Generative artificial intelligence tools, like ChatGPT, are an increasingly utilized resource among computational social scientists. Nevertheless, the performance of ChatGPT on complex tasks, such as classifying and annotating datasets containing nuanced language, remains insufficiently understood. In this paper, we measure the performance of GPT-4 on one such task and compare results to human annotators. We investigate ChatGPT versions 3.5, 4, and 4o to examine performance given the rapid pace of advancement in large language models. We craft four prompt styles as input and evaluate precision, recall, and F1 scores. Both quantitative and qualitative evaluations of results demonstrate that while including label definitions in prompts may help performance, overall GPT-4 has difficulty classifying nuanced language. Qualitative analysis reveals four specific findings. Our results suggest that the use of ChatGPT in classification tasks involving nuanced language should be conducted with prudence.
Similar Papers
Understanding Why ChatGPT Outperforms Humans in Visualization Design Advice
Human-Computer Interaction
AI understands pictures better than people.
ChatGPT as a Translation Engine: A Case Study on Japanese-English
Computation and Language
Makes computers translate Japanese to English better.
Capabilities of GPT-5 across critical domains: Is it the next breakthrough?
Human-Computer Interaction
New AI helps doctors diagnose better.