Large Language Models for Subjective Language Understanding: A Survey
By: Changhao Song, Yazhou Zhang, Hui Gao, and more
Potential Business Impact:
Helps computers understand feelings, opinions, and jokes.
Subjective language understanding refers to a broad set of natural language processing tasks whose goal is to interpret or generate content that conveys personal feelings, opinions, or figurative meanings rather than objective facts. With the advent of large language models (LLMs) such as ChatGPT, LLaMA, and others, there has been a paradigm shift in how we approach these inherently nuanced tasks. In this survey, we provide a comprehensive review of recent advances in applying LLMs to subjective language tasks, including sentiment analysis, emotion recognition, sarcasm detection, humor understanding, stance detection, metaphor interpretation, intent detection, and aesthetics assessment. We begin by clarifying the definition of subjective language from linguistic and cognitive perspectives, and we outline the unique challenges it poses (e.g., ambiguity, figurativeness, context dependence). We then survey the evolution of LLM architectures and techniques that particularly benefit subjectivity tasks, highlighting why LLMs are well suited to modeling subtle, human-like judgments. For each of the eight tasks, we summarize task definitions, key datasets, state-of-the-art LLM-based methods, and remaining challenges. We provide comparative insights, discussing commonalities and differences among tasks and how multi-task LLM approaches might yield unified models of subjectivity. Finally, we identify open issues such as data limitations, model bias, and ethical considerations, and suggest future research directions. We hope this survey will serve as a valuable resource for researchers and practitioners interested in the intersection of affective computing, figurative language processing, and large-scale language models.
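To give a concrete flavor of the prompt-based paradigm that many of the LLM methods reviewed in the survey build on, here is a minimal, hypothetical sketch of zero-shot prompting for three of the subjective tasks it covers (sentiment, sarcasm, stance). It is not taken from the paper: the `call_llm` function, the prompt templates, and the label parsing are all placeholder assumptions, to be replaced by whatever LLM backend and prompting scheme one actually uses.

```python
# Minimal sketch of zero-shot LLM prompting for subjective language tasks.
# `call_llm` is a placeholder: swap in any chat/completion backend
# (a hosted API or a local instruction-tuned model).

TASK_PROMPTS = {
    "sentiment": (
        "Classify the sentiment of the text as positive, negative, or neutral.\n"
        "Text: {text}\nLabel:"
    ),
    "sarcasm": (
        "Does the following text contain sarcasm? Answer yes or no.\n"
        "Text: {text}\nAnswer:"
    ),
    "stance": (
        "What is the author's stance toward '{target}'? "
        "Answer favor, against, or neutral.\n"
        "Text: {text}\nStance:"
    ),
}


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; should return the model's raw text output."""
    raise NotImplementedError("Plug in your LLM backend here.")


def classify(task: str, text: str, **kwargs) -> str:
    """Build a zero-shot prompt for a subjective task and parse the first output token."""
    prompt = TASK_PROMPTS[task].format(text=text, **kwargs)
    output = call_llm(prompt)
    return output.strip().split()[0].lower()  # crude label extraction


# Example usage (requires a real backend behind call_llm):
# classify("sarcasm", "Oh great, another Monday. I'm thrilled.")
# classify("stance", "Solar subsidies waste taxpayer money.", target="renewable energy")
```

The same template-plus-parse pattern extends naturally to the other tasks discussed (emotion recognition, humor, metaphor, intent, aesthetics), typically with few-shot examples or chain-of-thought added to the prompt.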
Similar Papers
Exploring Subjective Tasks in Farsi: A Survey Analysis and Evaluation of Language Models
Computation and Language
Helps computers understand Farsi feelings and opinions better.
Uncovering Gaps in How Humans and LLMs Interpret Subjective Language
Computation and Language
Shows where AI and people read subjective words differently.
Objective Metrics for Evaluating Large Language Models Using External Data Sources
Computation and Language
Measures AI ability fairly using outside data sources.