Modifying Large Language Model Post-Training for Diverse Creative Writing
By: John Joon Young Chung, Vishakh Padmakumar, Melissa Roemmele, and more
Potential Business Impact:
Makes AI write more varied and creative stories.
As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality while neglecting output diversity. Hence, for creative writing generation, we investigate post-training approaches that promote both output diversity and quality. Our core idea is to include deviation -- the degree of difference between a training sample and all other samples with the same prompt -- in the training objective to facilitate learning from rare high-quality instances. By applying our approach to direct preference optimization (DPO) and odds ratio preference optimization (ORPO), we demonstrate that we can promote the output diversity of trained models while minimally decreasing quality. Our best model with 8B parameters achieves diversity on par with a human-created dataset while maintaining output quality similar to the best instruction-tuned models we examined, GPT-4o and DeepSeek-R1. We further validate our approaches with a human evaluation, an ablation, and a comparison to an existing diversification approach, DivPO.
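To make the core recipe concrete, the sketch below illustrates one plausible way to fold a per-sample deviation score into a DPO-style objective. It is a minimal, hypothetical illustration, not the paper's implementation: deviation is assumed to be the mean pairwise cosine distance among responses to the same prompt, and the function names, the `dev_weight` parameter, and the multiplicative reweighting scheme are all assumptions made for clarity.

```python
# Sketch: deviation-weighted DPO loss (hypothetical; the paper's exact
# formulation may differ). Idea: score how much a chosen response deviates
# from other responses to the same prompt, then upweight its contribution
# so rare, high-quality samples influence training more.

import torch
import torch.nn.functional as F


def deviation_scores(embeddings: torch.Tensor) -> torch.Tensor:
    """Mean pairwise cosine distance of each response to the other responses
    that share its prompt. embeddings: (num_responses, dim)."""
    normed = F.normalize(embeddings, dim=-1)
    sims = normed @ normed.T              # (n, n) cosine similarities
    n = sims.size(0)
    sims.fill_diagonal_(0.0)              # drop self-similarity
    return 1.0 - sims.sum(dim=-1) / (n - 1)


def deviation_weighted_dpo_loss(
    policy_chosen_logp: torch.Tensor,     # log p_theta(y_w | x), shape (batch,)
    policy_rejected_logp: torch.Tensor,   # log p_theta(y_l | x)
    ref_chosen_logp: torch.Tensor,        # log p_ref(y_w | x)
    ref_rejected_logp: torch.Tensor,      # log p_ref(y_l | x)
    chosen_deviation: torch.Tensor,       # deviation of each chosen sample, in [0, 1]
    beta: float = 0.1,
    dev_weight: float = 1.0,              # assumed hyperparameter controlling the effect
) -> torch.Tensor:
    # Standard DPO margin between chosen and rejected responses.
    logits = (policy_chosen_logp - ref_chosen_logp) - (
        policy_rejected_logp - ref_rejected_logp
    )
    per_example = -F.logsigmoid(beta * logits)
    # Upweight examples whose chosen response deviates more from its peers.
    weights = 1.0 + dev_weight * chosen_deviation
    return (weights * per_example).mean()
```

The same reweighting idea could in principle be attached to an ORPO loss instead of DPO; the snippet only shows the DPO case for brevity.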
Similar Papers
Evaluating the Diversity and Quality of LLM Generated Content
Computation and Language
Makes AI write more creative and useful things.
Beyond Single: A Data Selection Principle for LLM Alignment via Fine-Grained Preference Signals
Machine Learning (CS)
Teaches AI to follow many different rules better.
Enhancing Diversity in Large Language Models via Determinantal Point Processes
Computation and Language
Makes AI write more creative and varied answers.