Easy Dataset: A Unified and Extensible Framework for Synthesizing LLM Fine-Tuning Data from Unstructured Documents
By: Ziyang Miao, Qiyu Sun, Jingyuan Wang, and more
Potential Business Impact:
Makes smart computers learn new jobs easily.
Large language models (LLMs) have shown impressive performance on general-purpose tasks, yet adapting them to specific domains remains challenging due to the scarcity of high-quality domain data. Existing data synthesis tools often struggle to reliably extract fine-tuning data from heterogeneous documents. To address this limitation, we propose Easy Dataset, a unified framework for synthesizing fine-tuning data from unstructured documents via an intuitive graphical user interface (GUI). Specifically, Easy Dataset allows users to easily configure text extraction models and chunking strategies to transform raw documents into coherent text chunks. It then leverages a persona-driven prompting approach to generate diverse question-answer pairs using publicly available LLMs. Throughout the pipeline, a human-in-the-loop visual interface facilitates the review and refinement of intermediate outputs to ensure data quality. Experiments on a financial question-answering task show that fine-tuning LLMs on the synthesized dataset significantly improves domain-specific performance while preserving general knowledge. The source code and installable package are available at https://github.com/ConardLi/easy-dataset and have garnered over 9,000 GitHub stars.
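To make the described pipeline more concrete, below is a minimal Python sketch of its two core steps: splitting a raw document into overlapping chunks and generating persona-driven question-answer pairs from each chunk. The chunk sizes, persona list, prompt template, and the `call_llm` callback are illustrative assumptions for this sketch, not Easy Dataset's actual implementation (the tool itself is a GUI application in the linked repository).

```python
# Illustrative sketch (not the Easy Dataset implementation): chunk a raw document,
# then build persona-driven prompts for QA-pair synthesis with any LLM backend.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class QAPair:
    question: str
    answer: str
    persona: str
    chunk_id: int


def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> List[str]:
    """Split raw text into overlapping character-based chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


# Hypothetical personas; in the real tool these would be user-configurable.
PERSONAS = ["a retail investor", "a compliance officer", "a financial analyst"]

PROMPT_TEMPLATE = (
    "You are {persona}. Based only on the passage below, write one question "
    "this persona would ask and a faithful answer.\n\nPassage:\n{chunk}\n\n"
    'Return JSON: {{"question": ..., "answer": ...}}'
)


def synthesize_qa(
    document: str,
    call_llm: Callable[[str], dict],  # user-supplied wrapper around any LLM API
) -> List[QAPair]:
    """Generate persona-driven QA pairs for every chunk of the document."""
    pairs = []
    for i, chunk in enumerate(chunk_text(document)):
        for persona in PERSONAS:
            prompt = PROMPT_TEMPLATE.format(persona=persona, chunk=chunk)
            result = call_llm(prompt)  # expected: {"question": ..., "answer": ...}
            pairs.append(QAPair(result["question"], result["answer"], persona, i))
    return pairs
```

In this sketch, varying the persona per chunk is what drives question diversity; the resulting QA pairs would then pass through the human-in-the-loop review step before being exported for fine-tuning.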
Similar Papers
Large Language Models and Synthetic Data for Monitoring Dataset Mentions in Research Papers
Computation and Language
Finds where research data is used automatically.
Bridging the Editing Gap in LLMs: FineEdit for Precise and Targeted Text Modifications
Computation and Language
Teaches computers to fix text perfectly.
Rethinking Data: Towards Better Performing Domain-Specific Small Language Models
Computation and Language
Makes small AI models answer questions as well as big ones.