Instructional Prompt Optimization for Few-Shot LLM-Based Recommendations on Cold-Start Users

Published: September 11, 2025 | arXiv ID: 2509.09066v1

By: Haowei Yang, Yushang Zhao, Sitao Min, and more

Potential Business Impact:

Helps new users get good suggestions faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The cold-start user problem compromises the effectiveness of recommender systems by limiting access to historical behavioral information. We present an effective pipeline for optimizing instructional prompts for few-shot large language model (LLM) recommendation tasks. We introduce a context-conditioned prompt formulation method $P(u, D_s) \rightarrow \hat{R}$, where $u$ is a cold-start user profile, $D_s$ is a curated support set, and $\hat{R}$ is the predicted ranked list of items. Based on systematic experimentation with transformer-based autoregressive LLMs (BioGPT, LLaMA-2, GPT-4), we provide empirical evidence that optimal exemplar injection and instruction structuring can significantly improve the precision@k and NDCG scores of such models in low-data settings. The pipeline uses token-level alignment and embedding-space regularization to achieve greater semantic fidelity. Our findings show that prompt composition is not merely syntactic but also functional, as it directly controls attention scaling and decoder behavior during inference. This paper shows that prompt-based adaptation is a viable way to address cold-start recommendation issues in LLM-based pipelines.
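As a concrete illustration of how such a context-conditioned formulation $P(u, D_s) \rightarrow \hat{R}$ could look in practice, the sketch below builds a few-shot prompt from a cold-start profile $u$ and a support set $D_s$ of exemplars, derives a ranked list $\hat{R}$ from the model's completion, and scores it with precision@k and NDCG@k. This is a minimal sketch under stated assumptions, not the authors' implementation: `Exemplar`, `build_prompt`, and `rank_items` are hypothetical names, and the `llm` callable stands in for any autoregressive model endpoint.

```python
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class Exemplar:
    """One entry of the curated support set D_s (hypothetical structure)."""
    profile: str             # textual description of a support-set user
    liked_items: list[str]   # items that user interacted with

def build_prompt(user_profile: str, support_set: list[Exemplar],
                 candidates: list[str]) -> str:
    """Compose an instruction-structured few-shot prompt: a task
    instruction, injected exemplars from D_s, then the cold-start
    user u and the candidate items to rank."""
    lines = ["You are a recommender. Rank the candidate items for the user."]
    for ex in support_set:  # exemplar injection
        lines.append(f"User: {ex.profile}")
        lines.append(f"Recommended: {', '.join(ex.liked_items)}")
    lines.append(f"User: {user_profile}")
    lines.append(f"Candidates: {', '.join(candidates)}")
    lines.append("Recommended:")
    return "\n".join(lines)

def rank_items(llm: Callable[[str], str], user_profile: str,
               support_set: list[Exemplar],
               candidates: list[str]) -> list[str]:
    """Query the LLM and keep candidates it actually named, preserving
    the generated order as the predicted ranking R_hat. The substring
    parse is deliberately naive; a real pipeline would use constrained
    decoding or per-candidate log-probability scoring."""
    completion = llm(build_prompt(user_profile, support_set, candidates))
    mentioned = [c for c in candidates if c in completion]
    return mentioned + [c for c in candidates if c not in mentioned]

def precision_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k predictions that are relevant."""
    return sum(item in relevant for item in ranked[:k]) / k

def ndcg_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """NDCG@k with binary relevance: DCG of the prediction divided by
    the DCG of an ideal ranking that places all relevant items first."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / ideal if ideal else 0.0
```

In a full pipeline, the support set $D_s$ would be curated for semantic coverage of the candidate domain rather than sampled at random, since the abstract's finding is that exemplar choice and instruction structure, not just prompt content, drive the precision@k and NDCG gains.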

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Artificial Intelligence