AutoGeTS: Knowledge-based Automated Generation of Text Synthetics for Improving Text Classification
By: Chenhao Xue, Yuanzhe Jin, Adrian Carrasco-Revilla and more
Potential Business Impact:
Makes AI understand text better with fake examples.
When developing text classification models for real-world applications, one major challenge is collecting sufficient data for all text classes. In this work, we address this challenge by using large language models (LLMs) to generate synthetic data and using such data to improve the performance of the models without waiting for more real data to be collected and labelled. Because an LLM generates different synthetic data in response to different input examples, we formulate an automated workflow that searches for input examples leading to more "effective" synthetic data for improving the model concerned. We study three search strategies with an extensive set of experiments, and use the results to inform an ensemble algorithm that selects a search strategy according to the characteristics of a class. Further experiments demonstrate that this ensemble approach is more effective than any individual strategy in our automated workflow for improving classification models using LLMs.
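The workflow described above can be sketched in outline: seed examples are fed to a generator, the resulting synthetic data is added to the training set, and a search strategy keeps the seed subsets that most improve the model. The sketch below is a minimal, hypothetical illustration, not the paper's implementation: `generate_synthetic` stands in for an LLM prompt, `score` stands in for retraining and evaluating the classifier, and the random search is just one of several possible strategies (the paper studies three and combines them in an ensemble).

```python
import random

def generate_synthetic(examples, n=3):
    # Hypothetical stand-in for an LLM call: here we just label variants
    # of the seed examples; AutoGeTS would prompt an LLM with them instead.
    return [f"{ex} (variant {i})" for ex in examples for i in range(n)]

def score(train_texts):
    # Hypothetical proxy for post-retraining accuracy: rewards diverse
    # training sets. A real run would retrain the text classifier and
    # evaluate it on held-out data for the class concerned.
    return len(set(train_texts)) / (len(train_texts) + 1)

def search_best_seed(pool, base_train, k=2, trials=10, seed=0):
    """Random search over seed-example subsets: keep the subset whose
    synthetic data most improves the proxy score."""
    rng = random.Random(seed)
    best_subset, best_score = None, float("-inf")
    for _ in range(trials):
        subset = rng.sample(pool, k)
        candidate = base_train + generate_synthetic(subset)
        s = score(candidate)
        if s > best_score:
            best_subset, best_score = subset, s
    return best_subset, best_score

pool = ["refund request", "delivery delayed", "wrong item sent", "account locked"]
base_train = ["refund request", "account locked"]
subset, s = search_best_seed(pool, base_train)
```

An ensemble version, as in the paper, would run several such strategies and pick one per class based on class characteristics (e.g. class size or baseline performance).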
Similar Papers
Code Review Without Borders: Evaluating Synthetic vs. Real Data for Review Recommendation
Software Engineering
Teaches computers to check new code automatically.
AutoSynth: Automated Workflow Optimization for High-Quality Synthetic Dataset Generation via Monte Carlo Tree Search
Machine Learning (CS)
Creates smart computer answers without human help.
Towards Active Synthetic Data Generation for Finetuning Language Models
Machine Learning (CS)
Teaches computers to learn better from examples.