Attributes as Textual Genes: Leveraging LLMs as Genetic Algorithm Simulators for Conditional Synthetic Data Generation
By: Guangzeng Han, Weisi Liu, Xiaolei Huang
Potential Business Impact:
Makes computer-generated training text more realistic and varied.
Large Language Models (LLMs) excel at generating synthetic data, but ensuring its quality and diversity remains challenging. We propose Genetic Prompt, a novel framework that combines genetic algorithms with LLMs to augment synthetic data generation. Our approach treats semantic text attributes as gene sequences and leverages the LLM to simulate crossover and mutation operations. This genetic process enhances data quality and diversity by creating novel attribute combinations, yielding synthetic distributions closer to real-world data. To optimize parent selection, we also integrate an active learning scheme that expands the offspring search space. Our experiments on multiple NLP tasks reveal several key findings: Genetic Prompt not only significantly outperforms state-of-the-art baselines but also shows robust performance across various generator model sizes and scales. Moreover, we demonstrate that fusing our synthetic data with the original training set significantly boosts downstream model performance, particularly for class-imbalanced scenarios. Our findings validate that Genetic Prompt is an effective method for producing high-quality synthetic data for a wide range of NLP applications.
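To make the abstract's genetic framing concrete, here is a minimal sketch of the loop it describes: attributes act as a textual "genome", and an LLM plays the role of crossover and mutation operators. Every name here (call_llm, ATTRIBUTE_NAMES, the prompt wording, and the uniform parent sampling that stands in for the paper's active learning scheme) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of the Genetic Prompt idea; not the authors' code.
import random

ATTRIBUTE_NAMES = ["topic", "tone", "length", "style"]  # assumed textual "genes"


def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion call; returns a stub string here."""
    return f"[LLM output for: {prompt[:40]}...]"


def crossover(parent_a: dict, parent_b: dict) -> dict:
    """Ask the LLM to blend the two parents' values for each attribute."""
    child = {}
    for name in ATTRIBUTE_NAMES:
        prompt = (
            f"Combine these two '{name}' attributes into one coherent value:\n"
            f"A: {parent_a[name]}\nB: {parent_b[name]}"
        )
        child[name] = call_llm(prompt)
    return child


def mutate(sample: dict, rate: float = 0.2) -> dict:
    """Randomly perturb some attributes via the LLM to add diversity."""
    mutated = dict(sample)
    for name in ATTRIBUTE_NAMES:
        if random.random() < rate:
            mutated[name] = call_llm(
                f"Propose a novel variation of this '{name}': {sample[name]}"
            )
    return mutated


def generate_text(attributes: dict, label: str) -> str:
    """Condition the LLM on the attribute 'genome' to produce one synthetic example."""
    spec = ", ".join(f"{k}={v}" for k, v in attributes.items())
    return call_llm(f"Write a '{label}' example with attributes: {spec}")


def genetic_prompt(seed_pool: list[dict], label: str, generations: int = 3) -> list[str]:
    """Evolve the attribute pool and emit one synthetic text per generation."""
    population, synthetic = list(seed_pool), []
    for _ in range(generations):
        # Parent selection: the paper uses an active learning scheme to widen
        # the offspring search space; this sketch substitutes uniform sampling.
        parents = random.sample(population, 2)
        child = mutate(crossover(*parents))
        population.append(child)
        synthetic.append(generate_text(child, label))
    return synthetic
```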
Similar Papers
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
Computation and Language
Creates fake data to train AI faster.
MultiGA: Leveraging Multi-Source Seeding in Genetic Algorithms
Neural and Evolutionary Computing
Uses multiple starting populations to help genetic algorithms solve hard problems.
Biological Sequence with Language Model Prompting: A Survey
Computation and Language
Helps computers design new medicines faster.