Enhancing Diversity in Large Language Models via Determinantal Point Processes

Published: September 5, 2025 | arXiv ID: 2509.04784v1

By: Yilei Chen, Souradip Chakraborty, Lorenz Wolf, and more

Potential Business Impact:

Makes AI write more creative and varied answers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Supervised fine-tuning and reinforcement learning are two popular methods for post-training large language models (LLMs). While these methods improve performance on downstream tasks, they often reduce output diversity, leading to narrow, canonical responses. Existing approaches to enhancing diversity are limited: they either operate only at inference time or focus on lexical rather than semantic differences. We propose a novel training method named DQO, based on determinantal point processes (DPPs), to jointly optimize LLMs for quality and semantic diversity. Our approach samples and embeds a group of responses for each prompt, then uses the determinant of a kernel-based similarity matrix to measure diversity as the volume spanned by the embeddings of these responses. Experiments across instruction-following, summarization, story generation, and reasoning tasks demonstrate that our method substantially improves semantic diversity without sacrificing model quality.
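To make the determinant-as-volume idea concrete, here is a minimal sketch of the DPP diversity score described in the abstract. It assumes a cosine-similarity kernel over response embeddings and a small ridge term for numerical stability; the function name and these choices are illustrative, not the authors' exact implementation.

```python
import numpy as np

def dpp_diversity(embeddings: np.ndarray, eps: float = 1e-6) -> float:
    """Log-volume diversity score for a group of response embeddings.

    embeddings: (n, d) array, one row per sampled response.
    Returns the log-determinant of a cosine-similarity kernel matrix,
    which grows as the embeddings span a larger volume, i.e., as the
    responses become more semantically diverse.
    """
    # L2-normalize rows so K is a cosine-similarity Gram matrix.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    K = X @ X.T
    # A small ridge keeps the determinant well-defined when responses
    # are near-duplicates (K close to singular).
    K += eps * np.eye(K.shape[0])
    _, logdet = np.linalg.slogdet(K)
    return logdet

# Hypothetical usage: four responses embedded in a 384-dim space.
rng = np.random.default_rng(0)
diverse = rng.normal(size=(4, 384))                      # near-orthogonal
similar = np.tile(rng.normal(size=(1, 384)), (4, 1)) \
          + 0.01 * rng.normal(size=(4, 384))             # near-duplicates
print(dpp_diversity(diverse) > dpp_diversity(similar))   # True
```

Near-orthogonal embeddings give a Gram matrix close to the identity (log-determinant near zero), while near-duplicate responses drive the determinant toward zero (log-determinant strongly negative), so maximizing this term during training pushes the sampled responses apart semantically.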

Page Count
20 pages

Category
Computer Science:
Computation and Language