Steering Language Models in Multi-Token Generation: A Case Study on Tense and Aspect

Published: September 15, 2025 | arXiv ID: 2509.12065v1

By: Alina Klerings, Jannik Brinkmann, Daniel Ruffinelli, and more

Potential Business Impact:

Enables controlled generation of verb tense and aspect in language-model output, a step toward fine-grained grammatical control of generated text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are able to generate grammatically well-formed text, but how do they encode their syntactic knowledge internally? While prior work has focused largely on binary grammatical contrasts, in this work we study the representation and control of two multidimensional, hierarchical grammatical phenomena, verb tense and aspect, and for each identify distinct, orthogonal directions in residual space using linear discriminant analysis. Next, we demonstrate causal control over both grammatical features through concept steering across three generation tasks. Then, we use these identified features in a case study to investigate the factors influencing effective steering in multi-token generation. We find that steering strength, location, and duration are crucial parameters for reducing undesirable side effects such as topic shift and degeneration. Our findings suggest that models encode tense and aspect in structurally organized, human-like ways, but that effective control of such features during generation is sensitive to multiple factors and requires manual tuning or automated optimization.
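The recipe the abstract describes can be sketched in two steps: fit a linear discriminant on residual-stream activations labeled by grammatical class to obtain a concept direction, then steer generation by adding a scaled copy of that direction to hidden states. A minimal illustration, using synthetic activations in place of real LLM residual streams (the dimensionality, class labels, and `steer` helper are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
d = 64  # hypothetical residual-stream width

# Synthetic stand-in activations for a binary tense contrast (past vs. present);
# the paper extracts directions from real model activations instead.
offset = 2.0 * np.eye(d)[0]
past = rng.normal(0.0, 1.0, (200, d)) + offset
present = rng.normal(0.0, 1.0, (200, d)) - offset
X = np.vstack([past, present])
y = np.array([1] * 200 + [0] * 200)

# LDA yields one discriminative direction for a two-class contrast.
lda = LinearDiscriminantAnalysis().fit(X, y)
direction = lda.coef_[0] / np.linalg.norm(lda.coef_[0])

def steer(hidden, direction, strength):
    """Add a scaled concept direction to a residual-stream vector."""
    return hidden + strength * direction

h = rng.normal(0.0, 1.0, d)
h_steered = steer(h, direction, strength=4.0)
# The projection onto the tense direction increases by exactly `strength`,
# since `direction` is unit-norm.
print((h_steered - h) @ direction)
```

The `strength` argument corresponds to the steering-strength parameter the abstract highlights; in multi-token generation one would also choose *where* (which layer and token positions) and *for how long* (how many decoding steps) to apply the addition, the location and duration factors the paper studies.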

Country of Origin
🇩🇪 Germany

Page Count
21 pages

Category
Computer Science:
Computation and Language