Enhancing Diversity in Large Language Models via Determinantal Point Processes
By: Yilei Chen, Souradip Chakraborty, Lorenz Wolf and more
Potential Business Impact:
Makes AI write more creative and varied answers.
Supervised fine-tuning and reinforcement learning are two popular methods for post-training large language models (LLMs). While improving the model's performance on downstream tasks, they often reduce the model's output diversity, leading to narrow, canonical responses. Existing methods to enhance diversity are limited, either by operating at inference time or by focusing on lexical differences. We propose a novel training method named DQO based on determinantal point processes (DPPs) to jointly optimize LLMs for quality and semantic diversity. Our approach samples and embeds a group of responses for each prompt, then uses the determinant of a kernel-based similarity matrix to measure diversity as the volume spanned by the embeddings of these responses. Experiments across instruction-following, summarization, story generation, and reasoning tasks demonstrate that our method substantially improves semantic diversity without sacrificing model quality.
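The core diversity signal described in the abstract, the volume spanned by a group of response embeddings, can be computed as the log-determinant of their kernel similarity matrix. Below is a minimal sketch of that idea, not the authors' DQO training objective; the function name, the cosine kernel choice, and the jitter term are illustrative assumptions.

```python
# Sketch: group diversity as the log-determinant of a kernel similarity
# matrix over response embeddings (the DPP "volume" intuition).
import numpy as np

def diversity_logdet(embeddings: np.ndarray, jitter: float = 1e-6) -> float:
    """Log-determinant of the cosine-kernel Gram matrix of the responses.

    embeddings: shape (n_responses, dim), one row per sampled response.
    Larger values mean the embeddings span a larger volume, i.e. the
    responses are more semantically diverse.
    """
    # L2-normalize so kernel entries are cosine similarities in [-1, 1].
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kernel = normed @ normed.T                   # (n, n) similarity matrix
    kernel += jitter * np.eye(len(kernel))       # numerical stabilizer
    sign, logdet = np.linalg.slogdet(kernel)
    return logdet if sign > 0 else float("-inf")

# Toy usage: near-duplicate responses score lower than spread-out ones.
rng = np.random.default_rng(0)
base = rng.normal(size=(1, 8))
near_duplicates = base + 0.01 * rng.normal(size=(3, 8))
spread_out = rng.normal(size=(3, 8))
print(diversity_logdet(near_duplicates))  # strongly negative (small volume)
print(diversity_logdet(spread_out))       # closer to zero (larger volume)
```

In a training setup like the one the abstract describes, a term of this form would be combined with a quality objective so the model is rewarded for response groups that are both good and mutually dissimilar.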
Similar Papers
Diversified recommendations of cultural activities with personalized determinantal point processes
Information Retrieval
Shows you more interesting, different things you like.
Modifying Large Language Model Post-Training for Diverse Creative Writing
Computation and Language
Makes AI write more creative and different stories.
Evaluating the Diversity and Quality of LLM Generated Content
Computation and Language
Makes AI write more creative and useful things.