Learning to Rewrite Prompts for Bootstrapping LLMs on Downstream Tasks

Published: October 8, 2025 | arXiv ID: 2510.06695v1

By: Qinhao Zhou, Xiang Xiang, Kun He, and more

Potential Business Impact:

Improves machine translation by optimizing the input component of prompts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In recent years, the growing interest in Large Language Models (LLMs) has significantly advanced prompt engineering, transitioning from manual design to model-based optimization. Prompts for LLMs generally comprise two components: the instruction, which defines the task or objective, and the input, which is tailored to the instruction type. In natural language generation (NLG) tasks such as machine translation, the input component is particularly critical, while the instruction component tends to be concise. Existing prompt engineering methods primarily focus on optimizing the instruction component for general tasks, often requiring large-parameter LLMs as auxiliary tools. However, these approaches exhibit limited applicability for tasks like machine translation, where the input component plays a more pivotal role. To address this limitation, this paper introduces a novel prompt optimization method specifically designed for machine translation tasks. The proposed approach employs a small-parameter model trained using a back-translation-based strategy, significantly reducing training overhead for single-task optimization while delivering highly effective performance. With certain adaptations, this method can also be extended to other downstream tasks.
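To make the described pipeline concrete, below is a minimal Python sketch of the two pieces the abstract mentions: building (raw input, rewritten input) pairs via back-translation to train a small rewriter, and rewriting only the input component at inference time before querying a frozen LLM with a concise, fixed instruction. All function names, interfaces, and the prompt template are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List, Tuple


def build_back_translation_pairs(
    target_sentences: List[str],
    back_translate: Callable[[str], str],    # hypothetical: target text -> synthetic source input
    rewrite_reference: Callable[[str], str],  # hypothetical: produces the "good" rewritten input
) -> List[Tuple[str, str]]:
    """Construct (raw input, rewritten input) pairs for training the small rewriter,
    assuming supervision is derived from back-translated monolingual target-side text."""
    pairs = []
    for tgt in target_sentences:
        raw_src = back_translate(tgt)           # synthetic, possibly noisy source input
        good_src = rewrite_reference(raw_src)   # target rewrite the small model should learn
        pairs.append((raw_src, good_src))
    return pairs


def translate_with_rewritten_prompt(
    source_text: str,
    rewriter: Callable[[str], str],  # small-parameter prompt rewriter (assumed interface)
    llm: Callable[[str], str],       # frozen LLM that performs the translation
    instruction: str = "Translate the following sentence into English:",
) -> str:
    """Rewrite only the input component, keep the concise instruction fixed,
    then query the LLM with the assembled prompt."""
    rewritten_input = rewriter(source_text)
    prompt = f"{instruction}\n{rewritten_input}"
    return llm(prompt)
```

In this reading of the abstract, only the small rewriter is trained; the LLM stays fixed, which is what keeps single-task optimization overhead low.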

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science: Computation and Language