An Empirical Study on Prompt Compression for Large Language Models
By: Zheng Zhang, Jinyi Li, Yihuai Lan, and more
Potential Business Impact:
Shortens the prompts sent to LLMs, saving money and compute time.
Prompt engineering enables Large Language Models (LLMs) to perform a variety of tasks. However, lengthy prompts significantly increase computational complexity and economic costs. To address this issue, we study six prompt compression methods for LLMs, aiming to reduce prompt length while maintaining LLM response quality. In this paper, we present a comprehensive analysis covering aspects such as generation performance, model hallucinations, efficacy in multimodal tasks, word omission analysis, and more. We evaluate these methods across 13 datasets, including news, scientific articles, commonsense QA, math QA, long-context QA, and VQA datasets. Our experiments reveal that prompt compression has a greater impact on LLM performance in long contexts than in short ones. In the LongBench evaluation, moderate compression even enhances LLM performance. Our code and data are available at https://github.com/3DAgentWorld/Toolkit-for-Prompt-Compression.
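To illustrate the general idea behind prompt compression (and the word-omission analysis the abstract mentions), here is a minimal, hypothetical sketch that shortens a prompt by dropping common function words. This is not one of the six methods studied in the paper; the stopword list and the `compress_prompt` helper are illustrative assumptions only.

```python
# Hypothetical sketch: prompt compression by word omission.
# Drops common function words and reports the compression ratio.
# This is NOT one of the paper's six methods; it only shows the concept.

STOPWORDS = {"a", "an", "the", "of", "to", "and", "is", "are", "in", "on",
             "that", "this", "for", "with", "as", "it", "be"}

def compress_prompt(prompt: str) -> tuple[str, float]:
    """Remove stopwords from a prompt; return the shorter prompt
    and the fraction of the original character length retained."""
    words = prompt.split()
    kept = [w for w in words if w.lower().strip(".,") not in STOPWORDS]
    compressed = " ".join(kept)
    ratio = len(compressed) / max(len(prompt), 1)
    return compressed, ratio

prompt = "Summarize the main findings of the article in a single sentence."
short, ratio = compress_prompt(prompt)
print(short)           # content words only
print(f"{ratio:.2f}")  # fraction of the original length retained
```

Real compressors (e.g., those benchmarked in the paper's toolkit) use learned importance scores rather than a fixed stopword list, but the interface is the same: a long prompt goes in, a shorter prompt with most of the task-relevant content comes out.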
Similar Papers
Understanding and Improving Information Preservation in Prompt Compression for LLMs
Computation and Language
Makes AI understand long instructions better.
The Future of MLLM Prompting is Adaptive: A Comprehensive Experimental Evaluation of Prompt Engineering Methods for Robust Multimodal Performance
Artificial Intelligence
Teaches AI to understand pictures and words better.
Revisiting Prompt Engineering: A Comprehensive Evaluation for LLM-based Personalized Recommendation
Information Retrieval
Helps computers suggest things you'll like.