VOYAGER: A Training-Free Approach for Generating Diverse Datasets using LLMs
By: Avinash Amballa, Yashas Malur Saidutta, Chi-Heng Lin, and more
Potential Business Impact:
Makes computer-made data more varied and useful.
Large language models (LLMs) are increasingly being used to generate synthetic datasets for the evaluation and training of downstream models. However, prior work has noted that such generated data lacks diversity. In this paper, we propose Voyager, a novel principled approach to generating diverse datasets. Our approach is iterative and directly optimizes a mathematical quantity that measures the diversity of the dataset, using the machinery of determinantal point processes. Furthermore, our approach is training-free, applicable to closed-source models, and scalable. In addition to providing theoretical justification for our method, we demonstrate through comprehensive experiments that Voyager significantly outperforms popular baseline approaches, providing a 1.5-3x improvement in diversity.
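To make the determinantal-point-process idea concrete: a DPP scores a set of items by the determinant of a similarity kernel built from their embeddings, so a larger log-determinant corresponds to a more spread-out (diverse) set. The sketch below is a minimal illustration of that kind of diversity objective under stated assumptions, not the authors' released code; the function names (`dpp_diversity`, `greedy_select`), the cosine-similarity kernel, and the greedy selection loop are all assumptions made for illustration.

```python
import numpy as np

def dpp_diversity(embeddings: np.ndarray) -> float:
    """Log-determinant of a cosine-similarity kernel; larger means a more diverse set."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    K = X @ X.T
    # A small ridge term keeps the kernel positive definite when items are near-duplicates.
    _, logdet = np.linalg.slogdet(K + 1e-6 * np.eye(len(K)))
    return logdet

def greedy_select(selected: np.ndarray, candidates: np.ndarray) -> int:
    """Index of the candidate whose addition most increases the diversity score."""
    gains = [dpp_diversity(np.vstack([selected, c[None, :]])) for c in candidates]
    return int(np.argmax(gains))

# Example: 5 already-selected embeddings, 3 new candidates (random stand-ins).
rng = np.random.default_rng(0)
selected = rng.normal(size=(5, 32))
candidates = rng.normal(size=(3, 32))
print(greedy_select(selected, candidates))
```

Greedy selection of this sort is a standard way to approximately maximize a log-determinant objective; an iterative, training-free pipeline could score each batch of LLM generations this way and keep only the ones that add the most diversity.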
Similar Papers
Automata-Based Steering of Large Language Models for Diverse Structured Generation
Computation and Language
Creates more varied computer-generated text.
Beyond Quality: Unlocking Diversity in Ad Headline Generation with Large Language Models
Computation and Language
Makes ads show different, better headlines.
Selective Expert Guidance for Effective and Diverse Exploration in Reinforcement Learning of LLMs
Artificial Intelligence
Teaches AI to think better by guiding key choices.