Score: 3

Modifying Large Language Model Post-Training for Diverse Creative Writing

Published: March 21, 2025 | arXiv ID: 2503.17126v1

By: John Joon Young Chung, Vishakh Padmakumar, Melissa Roemmele, and more

BigTech Affiliations: MidJourney

Potential Business Impact:

Enables AI systems to generate stories that are both more creative and more varied.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As creative writing tasks do not have singular correct answers, large language models (LLMs) trained to perform these tasks should be able to generate diverse valid outputs. However, LLM post-training often focuses on improving generation quality while neglecting output diversity. Hence, for creative writing generation, we investigate post-training approaches that promote both output diversity and quality. Our core idea is to include deviation -- the degree of difference between a training sample and all other samples with the same prompt -- in the training objective, to facilitate learning from rare high-quality instances. By adopting our approach in direct preference optimization (DPO) and odds ratio preference optimization (ORPO), we demonstrate that we can promote the output diversity of trained models while minimally decreasing quality. Our best model with 8B parameters achieves diversity on par with a human-created dataset while maintaining output quality similar to the best instruction-tuned models we examined, GPT-4o and DeepSeek-R1. We further validate our approaches with a human evaluation, an ablation, and a comparison to an existing diversification approach, DivPO.
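To make the core idea concrete, here is a minimal sketch of how a deviation term might be folded into a DPO-style objective. This is a hypothetical illustration under assumptions, not the paper's implementation: the function names (deviation_scores, deviation_weighted_dpo_loss) and the cosine-distance definition of deviation are assumptions made for the sketch.

```python
# Sketch (assumed, not the authors' exact formulation): weight a DPO-style
# preference loss by "deviation" -- how different a chosen sample is from
# the other samples written for the same prompt -- so rare, high-quality
# outputs contribute more to the training signal.
import torch
import torch.nn.functional as F


def deviation_scores(sample_embeddings: torch.Tensor) -> torch.Tensor:
    """Deviation of each sample = mean distance (1 - cosine similarity)
    to the other samples that share the same prompt."""
    sims = F.cosine_similarity(
        sample_embeddings.unsqueeze(1), sample_embeddings.unsqueeze(0), dim=-1
    )
    n = sims.size(0)
    # Exclude self-similarity (the diagonal, which is 1) from the average.
    mean_sim = (sims.sum(dim=1) - 1.0) / (n - 1)
    return 1.0 - mean_sim


def deviation_weighted_dpo_loss(
    policy_chosen_logps: torch.Tensor,
    policy_rejected_logps: torch.Tensor,
    ref_chosen_logps: torch.Tensor,
    ref_rejected_logps: torch.Tensor,
    chosen_deviation: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO loss, scaled per example by the chosen sample's deviation
    (normalized to mean 1 so the overall loss magnitude is preserved)."""
    logits = beta * (
        (policy_chosen_logps - ref_chosen_logps)
        - (policy_rejected_logps - ref_rejected_logps)
    )
    per_example = -F.logsigmoid(logits)
    weights = chosen_deviation / chosen_deviation.mean().clamp_min(1e-8)
    return (weights * per_example).mean()
```

In this sketch, high-deviation chosen samples are upweighted relative to common ones; the same weighting idea could be applied to an ORPO-style objective by scaling its per-example loss in the same way.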

Country of Origin
🇺🇸 United States


Page Count
25 pages

Category
Computer Science:
Computation and Language