Linguistic Blind Spots of Large Language Models
By: Jiali Cheng, Hadi Amiri
Potential Business Impact:
AI still struggles to identify parts of sentences, such as nouns, verbs, and clauses.
Large language models (LLMs) are the foundation of many AI applications today. However, despite their remarkable proficiency in generating coherent text, questions linger regarding their ability to perform fine-grained linguistic annotation tasks, such as detecting nouns or verbs, or identifying more complex syntactic structures like clauses in input texts. These tasks require precise syntactic and semantic understanding of input text, and when LLMs underperform on specific linguistic structures, it raises concerns about their reliability for detailed linguistic analysis and whether their outputs (even when correct) truly reflect an understanding of the inputs. In this paper, we empirically study the performance of recent LLMs on fine-grained linguistic annotation tasks. Through a series of experiments, we find that recent LLMs show limited efficacy in addressing linguistic queries and often struggle with linguistically complex inputs. We show that even the most capable LLM tested (Llama3-70b) makes notable errors in detecting linguistic structures, such as misidentifying embedded clauses, failing to recognize verb phrases, and confusing complex nominals with clauses. Our results provide insights that can inform future advances in LLM design and development.
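To make the annotation tasks concrete, here is a minimal sketch (not the authors' evaluation code) of the kinds of queries studied: part-of-speech detection and embedded-clause identification. It uses spaCy's en_core_web_sm model as a reference annotator; the example sentence and the set of clause-level dependency labels are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of fine-grained linguistic annotation queries,
# using spaCy as a reference annotator (not the paper's evaluation code).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# An example input with an embedded (relative) clause, the kind of
# structure the paper reports LLMs often misidentify.
doc = nlp("The report that the committee rejected was revised.")

# Task 1: detect nouns and verbs (coarse part-of-speech tags).
pos_tags = [(tok.text, tok.pos_) for tok in doc]
print(pos_tags)

# Task 2: locate embedded clauses via dependency labels that mark
# clausal complements and modifiers in spaCy's English scheme.
CLAUSE_DEPS = {"ccomp", "xcomp", "advcl", "acl", "relcl", "csubj"}
for tok in doc:
    if tok.dep_ in CLAUSE_DEPS:
        clause = " ".join(t.text for t in tok.subtree)
        print(f"embedded clause ({tok.dep_}): {clause!r}")
```

An evaluation like the paper's would pose these same queries to an LLM in natural language and compare its answers against such reference annotations.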
Similar Papers
Syntactic Blind Spots: How Misalignment Leads to LLMs' Mathematical Errors
Computation and Language
Fixes math problems by changing how they're asked.
Large Language Models as Span Annotators
Computation and Language
Computers can now find and label text parts.
Semantic Mastery: Enhancing LLMs with Advanced Natural Language Understanding
Computation and Language
Makes AI understand and talk like people.