Prompt-MII: Meta-Learning Instruction Induction for LLMs

Published: October 19, 2025 | arXiv ID: 2510.16932v1

By: Emily Xiao, Yixiao Zeng, Ada Chen, and more

Potential Business Impact:

Lets LLMs perform new tasks with far fewer prompt tokens, cutting inference costs while matching in-context learning quality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A popular method to adapt large language models (LLMs) to new tasks is in-context learning (ICL), which is effective but incurs high inference costs as context length grows. In this paper we propose a method to perform instruction induction, where we take training examples and reduce them to a compact but descriptive prompt that can achieve performance comparable to ICL over the full training set. Specifically, we propose PROMPT-MII, a reinforcement learning (RL) based framework to meta-learn an instruction induction model that can generate compact instructions on the fly for an arbitrary new dataset. We train on over 3,000 diverse classification datasets from the HuggingFace hub, and evaluate on 90 unseen tasks. PROMPT-MII improves downstream model quality by 4-9 F1 points (10-20% relative), matching ICL performance while requiring 3-13x fewer tokens.
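The core idea in the abstract can be illustrated with a toy sketch: instead of packing every training example into the prompt (standard ICL), a single induced instruction replaces the demonstrations. The helper names and the hand-written instruction below are assumptions for illustration; in PROMPT-MII the instruction is generated on the fly by the RL-trained induction model.

```python
def build_icl_prompt(examples, query):
    """Standard in-context learning: every training example rides along."""
    demos = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{demos}\nInput: {query}\nLabel:"

def build_induced_prompt(instruction, query):
    """Instruction induction: one compact instruction replaces the demos."""
    return f"{instruction}\nInput: {query}\nLabel:"

examples = [
    ("The movie was wonderful", "positive"),
    ("Terrible, a waste of time", "negative"),
    ("An instant classic", "positive"),
]
# Hand-written here; PROMPT-MII's meta-learned model would generate this.
instruction = "Classify the sentiment of the input as positive or negative."

icl = build_icl_prompt(examples, "I loved it")
induced = build_induced_prompt(instruction, "I loved it")

# Rough token proxy: whitespace word count. The induced prompt is
# several times shorter, mirroring the paper's reported 3-13x savings.
print(len(icl.split()), len(induced.split()))
```

The gap widens as the training set grows: the ICL prompt scales linearly with the number of demonstrations, while the induced instruction stays roughly constant.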

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language