Do Prompts Reshape Representations? An Empirical Study of Prompting Effects on Embeddings
By: Cesar Gonzalez-Gutierrez, Dirk Hovy
Potential Business Impact:
Helps explain how prompts let AI language models handle new tasks without extra training.
Prompting is a common approach for leveraging language models (LMs) in zero-shot settings. However, the underlying mechanisms that enable LMs to perform diverse tasks without task-specific supervision remain poorly understood. Studying the relationship between prompting and the quality of internal representations can shed light on how pre-trained embeddings may support in-context task solving. In this empirical study, we conduct a series of probing experiments on prompt embeddings, analyzing various combinations of prompt templates for zero-shot classification. Our findings show that while prompting affects the quality of representations, these changes do not consistently correlate with the relevance of the prompts to the target task. This result challenges the assumption that more relevant prompts necessarily lead to better representations. We further analyze potential factors that may contribute to this unexpected behavior.
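To make the probing setup concrete, below is a minimal sketch of how one might compare prompt embeddings via a linear probe. This is not the authors' implementation: the model name, pooling strategy, prompt templates, and toy sentiment data are all illustrative assumptions. The idea is simply to embed the same inputs under different prompt templates and measure how well a linear classifier can recover the task labels from those embeddings.

```python
# Minimal sketch of a linear-probing experiment on prompt embeddings.
# Assumptions (not from the paper): a Hugging Face encoder, mean-pooled
# last-layer hidden states, a logistic-regression probe, and toy data.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "roberta-base"  # illustrative choice, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(texts, batch_size=16):
    """Mean-pool the last-layer hidden states as a sentence embedding."""
    vecs = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc)
        mask = enc["attention_mask"].unsqueeze(-1)
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        vecs.append(pooled.cpu().numpy())
    return np.concatenate(vecs)

# Toy sentiment examples and two templates of differing task relevance.
texts = ["the movie was wonderful", "a dull, lifeless plot",
         "great acting throughout", "I regret watching this"] * 25
labels = np.array([1, 0, 1, 0] * 25)
templates = {
    "no_prompt": "{x}",
    "task_prompt": "Review: {x} Is the sentiment positive or negative?",
}

# Probe accuracy serves as a proxy for representation quality per template.
for name, tmpl in templates.items():
    X = embed([tmpl.format(x=t) for t in texts])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=0, stratify=labels)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{name}: probe accuracy = {probe.score(X_te, y_te):.3f}")
```

Comparing probe accuracy across templates is one way to test whether a more task-relevant prompt yields more linearly separable embeddings, which is the kind of relationship the paper finds to be inconsistent.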
Similar Papers
Beyond the Hype: Embeddings vs. Prompting for Multiclass Classification Tasks
Machine Learning (CS)
Shows that embedding-based classifiers can sort text into many categories better than prompted LLMs.
Are Prompts All You Need? Evaluating Prompt-Based Large Language Models (LLM)s for Software Requirements Classification
Software Engineering
Helps computers sort software requirements faster, using less labeled data.
Which Prompting Technique Should I Use? An Empirical Investigation of Prompting Techniques for Software Engineering Tasks
Software Engineering
Makes AI better at writing and fixing computer code.