LLMs Know More Than Words: A Genre Study with Syntax, Metaphor & Phonetics
By: Weiye Shi, Zhaowei Zhang, Shaoheng Yan, and more
Potential Business Impact:
Helps computers understand poetry and stories better.
Large language models (LLMs) demonstrate remarkable potential across diverse language-related tasks, yet whether they capture deeper linguistic properties, such as syntactic structure, phonetic cues, and metrical patterns, from raw text remains unclear. To analyze whether LLMs can learn these features effectively and apply them to important natural language tasks, we introduce a novel multilingual genre classification dataset derived from Project Gutenberg, a large-scale digital library offering free access to thousands of public-domain literary works. The dataset comprises thousands of sentences per binary task (poetry vs. novel; drama vs. poetry; drama vs. novel) in six languages (English, French, German, Italian, Spanish, and Portuguese). We augment each sentence with three explicit linguistic feature sets (syntactic tree structures, metaphor counts, and phonetic metrics) to evaluate their impact on classification performance. Experiments demonstrate that although LLM classifiers can learn latent linguistic structures either from raw text or from explicitly provided features, different features contribute unevenly across tasks, which underscores the importance of incorporating more complex linguistic signals during model training.
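To make the setup concrete, here is a minimal sketch of how one binary-task example might be formatted for an LLM classifier, with the three explicit feature sets exposed alongside the raw sentence. The `LinguisticFeatures` class, the `build_prompt` helper, and all feature values are hypothetical illustrations, not the authors' actual pipeline.

```python
# Sketch of a feature-augmented example for one binary genre task
# (poetry vs. novel). Feature extraction itself is assumed to have
# already been run; the values below are made up for illustration.

from dataclasses import dataclass


@dataclass
class LinguisticFeatures:
    syntax_tree: str        # e.g. a bracketed constituency parse
    metaphor_count: int     # number of metaphorical expressions detected
    phonetic_metrics: dict  # e.g. syllable counts, stress pattern


def build_prompt(sentence: str, feats: LinguisticFeatures,
                 labels: tuple = ("poetry", "novel")) -> str:
    """Format one classification example, optionally surfacing
    explicit linguistic features next to the raw text."""
    return (
        f"Task: decide whether the sentence comes from {labels[0]} or {labels[1]}.\n"
        f"Sentence: {sentence}\n"
        f"Syntactic parse: {feats.syntax_tree}\n"
        f"Metaphor count: {feats.metaphor_count}\n"
        f"Phonetic metrics: {feats.phonetic_metrics}\n"
        f"Answer with one word: {labels[0]} or {labels[1]}."
    )


# Example usage with invented feature values:
feats = LinguisticFeatures(
    syntax_tree="(S (NP The moon) (VP sails (PP over (NP the sea))))",
    metaphor_count=1,
    phonetic_metrics={"syllable_count": 8, "stress_pattern": "iambic"},
)
print(build_prompt("The moon sails over the sea.", feats))
```

Comparing classifier accuracy with and without the feature lines in such prompts is one straightforward way to measure how much each explicit signal contributes per task, in the spirit of the ablations described above.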
Similar Papers
Large Language Models' Internal Perception of Symbolic Music
Computation and Language
Computers learn music from text descriptions.
Cross-Task Benchmarking and Evaluation of General-Purpose and Code-Specific Large Language Models
Software Engineering
Makes computers better at understanding language and code.
Are the LLMs Capable of Maintaining at Least the Language Genus?
Computation and Language
Computers understand languages better when they're related.