Neologism Learning as a Parameter-Efficient Alternative to Fine-Tuning for Model Steering

Published: December 21, 2025 | arXiv ID: 2512.18551v1

By: Sungjoon Park, Varun Ramamurthi, Owen Terry

Potential Business Impact:

Teaches models new words so they follow instructions better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In language modeling, neologisms are new tokens trained to represent a concept not already included in a given model's vocabulary. Neologisms can be used to encourage specific behavior in models, for example by appending "Give me a neologism answer" to a prompt. Behavioral steering can also be achieved through fine-tuning, albeit with more compute and less flexibility: learning a neologism trains only d parameters (a single new embedding vector, where d is the model's hidden dimension) and still lets the user access the model's default behavior. We compare the performance of neologism learning against low-rank adaptation (LoRA) fine-tuning, finding that neologisms outperform fine-tuned models under a matched training setup (same data and hyperparameters). We also investigate self-verbalizations of neologisms and observe that the model occasionally makes up its own new words when asked about a neologism.
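
The mechanism the abstract describes lends itself to a short illustration. The sketch below, assuming a Hugging Face-style causal LM (the model name, training text, and hyperparameters are placeholders, not the paper's setup), adds one new token and trains only its d-dimensional embedding row while every pretrained weight stays frozen.

```python
# Minimal sketch of neologism learning, assuming a Hugging Face causal LM.
# Only the new token's embedding row (d parameters, d = hidden size) gets
# gradient updates; all pretrained weights stay frozen. Model name, training
# text, and hyperparameters here are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Register the neologism as a single new token and grow the embedding matrix.
tokenizer.add_tokens(["<neo>"])
model.resize_token_embeddings(len(tokenizer))
neo_id = tokenizer.convert_tokens_to_ids("<neo>")

# Freeze everything; gradients will be masked so only the new row updates.
for p in model.parameters():
    p.requires_grad = False
emb = model.get_input_embeddings()
emb.weight.requires_grad = True

optimizer = torch.optim.Adam([emb.weight], lr=1e-3)

# Toy training example; real training would pair <neo> prompts with
# completions exhibiting the target behavior.
texts = ["Give me a <neo> answer: keep it brief."]
for text in texts:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    # Zero gradients for every embedding row except the neologism's,
    # so exactly d parameters are trained.
    grad_mask = torch.zeros_like(emb.weight)
    grad_mask[neo_id] = 1.0
    emb.weight.grad *= grad_mask
    optimizer.step()
    optimizer.zero_grad()
```

Because only the <neo> row is trained, omitting the token from a prompt recovers the model's unmodified behavior, which is the flexibility advantage over fine-tuning noted above. For scale, with hidden dimension d = 4096 a neologism trains 4,096 parameters, while LoRA at rank r = 8 on even a single d x d weight matrix trains 2dr = 65,536.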

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Computation and Language