Order Matters: Rethinking Prompt Construction in In-Context Learning

Published: November 12, 2025 | arXiv ID: 2511.09700v1

By: Warren Li, Yiqian Wang, Zihan Wang, et al.

Potential Business Impact:

The order of in-context examples can affect LLM performance as much as which examples are chosen, so prompt pipelines should tune ordering, not just selection.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In-context learning (ICL) enables large language models to perform new tasks by conditioning on a sequence of examples. Most prior work reasonably and intuitively assumes that which examples are chosen has a far greater effect on performance than how those examples are ordered, leading to a focus on example selection. We revisit this assumption and conduct a systematic comparison between the effect of selection and ordering. Through controlled experiments on both classification and generation tasks, using multiple open-source model families (0.5B to 27B parameters) and GPT-5, we find that the variance in performance due to different example orderings is comparable to that from using entirely different example sets. Furthermore, we show that strong orderings can be identified using only a development set, achieving performance close to an oracle that selects the best ordering based on test labels. Our findings highlight the equal and intertwined importance of example selection and ordering in prompt design, calling for a reexamination of the assumptions held in ICL.
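To make the dev-set-based ordering search concrete, here is a minimal sketch. It assumes a generic `query_model(prompt) -> str` completion function and a simple `Input:`/`Label:` prompt template, both hypothetical; the paper does not prescribe a specific API or template. Given a fixed set of k in-context examples, the sketch scores candidate permutations on a small labeled development set and keeps the best-performing ordering.

```python
# Sketch of selecting an example ordering via a development set.
# `query_model` is a hypothetical stand-in for any LLM completion call.

import itertools
import random
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, label)


def build_prompt(ordering: List[Example], query: str) -> str:
    """Concatenate demonstrations in the given order, then append the query."""
    demos = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in ordering)
    return f"{demos}\nInput: {query}\nLabel:"


def dev_accuracy(
    ordering: List[Example],
    dev_set: List[Example],
    query_model: Callable[[str], str],
) -> float:
    """Fraction of dev examples the model labels correctly under this ordering."""
    correct = sum(
        query_model(build_prompt(ordering, x)).strip() == y for x, y in dev_set
    )
    return correct / len(dev_set)


def best_ordering(
    examples: List[Example],
    dev_set: List[Example],
    query_model: Callable[[str], str],
    max_candidates: int = 24,
    seed: int = 0,
) -> List[Example]:
    """Search permutations of the example set: exhaustive when feasible,
    otherwise a random sample (an assumption, not the paper's exact protocol)."""
    perms = list(itertools.permutations(examples))
    if len(perms) > max_candidates:
        rng = random.Random(seed)
        perms = rng.sample(perms, max_candidates)
    return list(max(perms, key=lambda p: dev_accuracy(list(p), dev_set, query_model)))
```

Since k examples admit k! orderings, sampling a bounded number of candidates keeps the search tractable for larger k. The paper's finding is that an ordering chosen this way, using only dev-set labels, performs close to an oracle ordering selected with test labels.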

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computation and Language