Modular Techniques for Synthetic Long-Context Data Generation in Language Model Training and Evaluation

Published: September 1, 2025 | arXiv ID: 2509.01185v2

By: Seganrasan Subramanian, Abhigya Verma

Potential Business Impact:

Enables language models to understand and retain information across very long documents and conversations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The ability of large language models (LLMs) to process and reason over long textual inputs is critical for a wide range of real-world applications. However, progress in this area is significantly constrained by the absence of high-quality, diverse, and verifiable long-context datasets suitable for both training and evaluation. This work introduces a modular, extensible framework for synthetic long-context data generation via prompt-based interaction with LLMs. The framework supports multiple training and alignment objectives, including Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Group Relative Policy Optimization (GRPO). It encompasses four core generation paradigms: multi-turn conversational dialogues, document-grounded input-output pairs, verifiable instruction-response tasks, and long-context reasoning examples. Through templated prompting, a model-agnostic architecture, and metadata-enriched outputs, the proposed approach facilitates scalable, controllable, and purpose-aligned dataset creation for advancing long-context capabilities in LLMs.
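The abstract describes templated prompting with a model-agnostic architecture and metadata-enriched outputs. As a rough illustration of that idea (not the paper's actual implementation: the class names, the `llm` callable, and the metadata fields below are all assumptions), one generation "paradigm" can be sketched as a prompt template plus an objective tag, with every produced example wrapped in metadata for filtering and reproducibility:

```python
import json
import uuid
from dataclasses import dataclass
from typing import Callable

@dataclass
class Paradigm:
    """A hypothetical generation paradigm: template + training objective."""
    name: str        # e.g. "document_grounded_qa"
    objective: str   # e.g. "SFT", "DPO", "GRPO"
    template: str    # prompt template with {placeholders}

def generate_example(paradigm: Paradigm,
                     llm: Callable[[str], str],
                     **slots) -> dict:
    """Fill the template, call the (model-agnostic) LLM callable,
    and wrap the result with metadata."""
    prompt = paradigm.template.format(**slots)
    return {
        "id": str(uuid.uuid4()),
        "paradigm": paradigm.name,
        "objective": paradigm.objective,
        "prompt": prompt,
        "response": llm(prompt),
        "metadata": {"slots": list(slots), "prompt_chars": len(prompt)},
    }

# Stand-in for a real model call; any str -> str callable plugs in here,
# which is what makes the design model-agnostic.
def dummy_llm(prompt: str) -> str:
    return f"[model answer to a {len(prompt)}-char prompt]"

qa = Paradigm(
    name="document_grounded_qa",
    objective="SFT",
    template=("Read the document below and answer the question.\n\n"
              "Document:\n{document}\n\nQuestion: {question}"),
)

ex = generate_example(qa, dummy_llm,
                      document="(long context here)",
                      question="What is the main claim?")
print(json.dumps(ex, indent=2))
```

Swapping `dummy_llm` for a real API client, and defining one `Paradigm` per generation mode (multi-turn dialogue, verifiable instruction-response, long-context reasoning), would mirror the extensible structure the abstract outlines.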

Page Count
26 pages

Category
Computer Science:
Computation and Language