Step-by-step Instructions and a Simple Tabular Output Format Improve the Dependency Parsing Accuracy of LLMs
By: Hiroshi Matsuda, Chunpeng Ma, Masayuki Asahara
Potential Business Impact:
Helps computers figure out sentence structure more accurately.
Recent advances in large language models (LLMs) have enabled impressive performance on a wide range of tasks. However, standard prompting often struggles to produce structurally valid and accurate outputs, especially for dependency parsing. We propose a novel step-by-step instruction strategy, in which universal part-of-speech tagging precedes the prediction of syntactic heads and dependency labels, combined with a simplified CoNLL-U-like output format. With this approach, our method achieves state-of-the-art accuracy on Universal Dependencies datasets across 17 languages without hallucination or contamination. We further show that multilingual fine-tuning simultaneously improves cross-language generalization performance. Our results highlight the effectiveness of explicit reasoning steps in LLM-based parsing and offer a scalable, format-consistent alternative to bracket-based approaches.
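To make the idea concrete, here is a minimal sketch of what a step-by-step parsing prompt and a parser for the simplified tabular output might look like. The prompt wording, the column order, and the example sentence are illustrative assumptions, not the authors' released prompts or exact format.

```python
# Hypothetical sketch of a step-by-step dependency-parsing prompt and a
# parser for a simplified CoNLL-U-like tabular output. Prompt wording and
# column order are illustrative assumptions, not the paper's exact setup.

PROMPT_TEMPLATE = """\
Parse the sentence below in three steps:
Step 1: Tokenize and assign a Universal POS tag (UPOS) to each token.
Step 2: Predict the head index of each token (0 for the root).
Step 3: Predict the Universal Dependencies relation label for each token.

Output one token per line as tab-separated columns:
ID<TAB>FORM<TAB>UPOS<TAB>HEAD<TAB>DEPREL

Sentence: {sentence}
"""

def parse_tabular_output(text: str) -> list[dict]:
    """Parse simplified CoNLL-U-like lines into token dicts,
    skipping malformed lines instead of failing outright."""
    rows = []
    for line in text.strip().splitlines():
        cols = line.split("\t")
        if len(cols) != 5:
            continue  # ignore lines that do not match the 5-column format
        tok_id, form, upos, head, deprel = cols
        if not (tok_id.isdigit() and head.isdigit()):
            continue  # ID and HEAD must be non-negative integers
        rows.append({
            "id": int(tok_id),
            "form": form,
            "upos": upos,
            "head": int(head),
            "deprel": deprel,
        })
    return rows

# Example: parsing a model response for "She sings well."
example_output = (
    "1\tShe\tPRON\t2\tnsubj\n"
    "2\tsings\tVERB\t0\troot\n"
    "3\twell\tADV\t2\tadvmod\n"
)
for token in parse_tabular_output(example_output):
    print(token)
```

The five-column layout mirrors a subset of standard CoNLL-U (ID, FORM, UPOS, HEAD, DEPREL), which keeps the model's output easy to validate line by line without requiring the full ten-column format.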
Similar Papers
Self-Correction Makes LLMs Better Parsers
Computation and Language
Teaches computers to understand sentences better.
Improving the Accuracy and Efficiency of Legal Document Tagging with Large Language Models and Instruction Prompts
Computation and Language
Helps lawyers sort legal papers faster.
A Note on Statistically Accurate Tabular Data Generation Using Large Language Models
Machine Learning (CS)
Makes fake computer data more like real data.