Unveiling LLMs' Metaphorical Understanding: Exploring Conceptual Irrelevance, Context Leveraging and Syntactic Influence
By: Fengying Ye, Shanshan Wang, Lidia S. Chao, and more
Potential Business Impact:
Computers still struggle to understand figurative language like metaphors.
Metaphor is a complex linguistic phenomenon shaped by context and external factors, which makes its analysis challenging. While Large Language Models (LLMs) demonstrate advanced capabilities in knowledge integration, contextual reasoning, and creative generation, their mechanisms for metaphor comprehension remain insufficiently explored. This study examines LLMs' metaphor-processing abilities from three perspectives: (1) Concept Mapping: using embedding-space projections to evaluate how LLMs map concepts in target domains (e.g., misinterpreting "fall in love" as "drop down from love"); (2) Metaphor-Literal Repository: analyzing metaphorical words and their literal counterparts to identify inherent metaphorical knowledge; and (3) Syntactic Sensitivity: assessing how metaphorical syntactic structures influence LLMs' performance. Our findings reveal that LLMs generate 15%–25% conceptually irrelevant interpretations, depend on metaphorical indicators in training data rather than contextual cues, and are more sensitive to syntactic irregularities than to structural comprehension. These insights underline the limitations of LLMs in metaphor analysis and call for more robust computational approaches.
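The concept-mapping perspective above can be illustrated with a minimal sketch: project a metaphorical phrase and candidate interpretations into a shared embedding space, then compare them by cosine similarity. The vectors below are hypothetical toy values, not the paper's data; in practice they would come from an LLM's embedding layer or a sentence encoder.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy embeddings for illustration only.
emb = {
    "fall in love": np.array([0.9, 0.1, 0.3]),
    "begin to feel romantic affection": np.array([0.85, 0.15, 0.35]),  # figurative reading
    "drop down from love": np.array([0.1, 0.9, 0.2]),                 # literal misreading
}

phrase = emb["fall in love"]
for interp in ("begin to feel romantic affection", "drop down from love"):
    print(f"{interp}: {cosine(phrase, emb[interp]):.3f}")
```

With well-formed embeddings, the figurative reading should score markedly higher than the literal misreading; a model whose projection favors the literal candidate exhibits the kind of conceptually irrelevant interpretation the study measures.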
Similar Papers
LLMs Know More Than Words: A Genre Study with Syntax, Metaphor & Phonetics
Computation and Language
Helps computers understand poetry and stories better.
How Deep is Love in LLMs' Hearts? Exploring Semantic Size in Human-like Cognition
Computation and Language
Helps computers judge the conceptual "size" of words like people do.
Semantic Mastery: Enhancing LLMs with Advanced Natural Language Understanding
Computation and Language
Makes AI understand and talk like people.