Neologism Learning for Controllability and Self-Verbalization
By: John Hewitt, Oyvind Tafjord, Robert Geirhos, and more
Potential Business Impact:
Teaches computers new words to control their answers.
Humans invent new words when a rising demand emerges for a new, useful concept (e.g., doomscrolling). We explore and validate a similar idea in our communication with LLMs: introducing new words to better understand and control the models, expanding on the recently introduced neologism learning. This method introduces a new word by adding a new word embedding and training it on examples that exhibit the concept, without changing any other model parameters.

We show that adding a new word allows for control of concepts such as flattery, incorrect answers, and text length, as well as more complex concepts in AxBench. We discover that neologisms can also further our understanding of the model via self-verbalization: models can describe in natural language what each new word means to them, for example explaining that a word representing the concept of incorrect answers means "a lack of complete, coherent, or meaningful answers..."

To validate self-verbalizations, we introduce plug-in evaluation: we insert the verbalization into the context of a model and measure whether it controls the target concept. In some self-verbalizations, we find machine-only synonyms: words that seem unrelated to humans but cause similar behavior in machines. Finally, we show how neologism learning can jointly learn multiple concepts in multiple words.
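The core mechanism described above, adding one new word embedding and training only that embedding while every other parameter stays frozen, can be illustrated with a toy sketch. This is a minimal illustration under stated assumptions, not the paper's actual LLM setup: the "model" is just an embedding table, the word name `<flattery>` is hypothetical, and the training signal is a stand-in target direction rather than gradients from real concept-exhibiting examples.

```python
# Toy sketch of neologism learning: extend the vocabulary by one word and
# update ONLY its embedding row. All pre-existing rows stay frozen.
# Assumptions: DIM, the target vector, and "<flattery>" are illustrative.
import random

DIM = 4
random.seed(0)

# Frozen embedding table for a tiny pre-existing vocabulary.
vocab = {"the": 0, "cat": 1, "sat": 2}
embeddings = [[random.gauss(0, 1) for _ in range(DIM)] for _ in vocab]
frozen_copy = [row[:] for row in embeddings]  # to verify nothing else moves

# Add the new word as a fresh trainable row (the only trainable parameter).
vocab["<flattery>"] = len(embeddings)
new_row = [0.0] * DIM
embeddings.append(new_row)

# Stand-in objective: pull the new embedding toward a target direction,
# playing the role of gradients from examples that exhibit the concept.
target = [1.0, -1.0, 0.5, 0.0]
lr = 0.5
for _ in range(20):
    # gradient of 0.5 * ||e - target||^2 w.r.t. e is (e - target)
    grad = [e - t for e, t in zip(new_row, target)]
    for i in range(DIM):
        new_row[i] -= lr * grad[i]

# After training, the new row has moved to the target, while the original
# vocabulary rows are bit-for-bit unchanged.
```

The design point this mirrors is parameter efficiency: because the update touches a single embedding row, the base model's behavior on ordinary text is untouched, and the new word acts as a switch for the learned concept.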
Similar Papers
Neologism Learning as a Parameter-Efficient Alternative to Fine-Tuning for Model Steering
Computation and Language
Teaches computers new words to follow instructions better.
Human-like conceptual representations emerge from language prediction
Computation and Language
Computers learn concepts from words, the way people do.
Uncovering Gaps in How Humans and LLMs Interpret Subjective Language
Computation and Language
Finds when AI writes wrong things by mistake.