Controlling Language Difficulty in Dialogues with Linguistic Features
By: Shuyao Xu, Wenguang Wang, Handong Gao, and more
Potential Business Impact:
Teaches language learners at their own level.
Large language models (LLMs) have emerged as powerful tools for supporting second language acquisition, particularly in simulating interactive dialogues for speaking practice. However, adapting the language difficulty of LLM-generated responses to match learners' proficiency levels remains a challenge. This work addresses the issue by proposing a framework for controlling language proficiency in educational dialogue systems. Our approach leverages three categories of linguistic features: readability features (e.g., Flesch-Kincaid Grade Level), syntactic features (e.g., syntactic tree depth), and lexical features (e.g., simple word ratio). Together these quantify and regulate text complexity. We demonstrate that training LLMs on linguistically annotated dialogue data enables precise modulation of language proficiency, outperforming prompt-based methods in both flexibility and stability. To evaluate difficulty control, we introduce Dilaprix, a novel metric integrating the aforementioned features, which correlates strongly with expert judgments of language difficulty. Empirical results show that our approach achieves superior controllability of language proficiency while maintaining high dialogue quality.
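The abstract names its features but not their formulas; two of them are standard and easy to sketch. Below is a minimal, stdlib-only approximation of the Flesch-Kincaid Grade Level (using its published formula with a crude vowel-group syllable heuristic) and a simple word ratio against a caller-supplied easy-word list. The syllable heuristic and the tiny word list in the usage note are illustrative assumptions, not the paper's implementation; the third feature class, syntactic tree depth, is omitted here because it requires a full parser (e.g., spaCy's dependency parse).

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per contiguous vowel group.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

def simple_word_ratio(text: str, simple_words: set[str]) -> float:
    # Fraction of tokens found in a given "easy word" vocabulary.
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    return sum(w in simple_words for w in words) / len(words)
```

For example, `flesch_kincaid_grade("The cat sat on the mat.")` comes out negative (six monosyllabic words in one sentence), i.e., below first-grade level, while long multi-clause sentences push the score up; a proficiency controller could condition generation on target bands of such scores.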
Similar Papers
Controlling Difficulty of Generated Text for AI-Assisted Language Learning
Computation and Language
Helps AI teach beginners new languages easily.
Towards Ontology-Based Descriptions of Conversations with Qualitatively-Defined Concepts
Artificial Intelligence
Makes AI talk at your exact skill level.
Can LLMs Generate High-Quality Task-Specific Conversations?
Computation and Language
Makes chatbot conversations better and more useful.