Brevity is the soul of sustainability: Characterizing LLM response lengths
By: Soham Poddar, Paramita Koley, Janardan Misra, and more
Potential Business Impact:
Makes AI give shorter but still useful answers, saving energy.
A significant portion of the energy consumed by Large Language Models (LLMs) arises from their inference processes; hence, developing energy-efficient inference methods is crucial. While several techniques exist for inference optimization, output compression remains relatively unexplored, with only a few preliminary efforts addressing this aspect. In this work, we first benchmark 12 decoder-only LLMs across 5 datasets, revealing that these models often produce responses that are substantially longer than necessary. We then conduct a comprehensive quality assessment of LLM responses, formally defining six information categories present in them, and show that LLMs often include redundant or additional information beyond the minimal answer. To address this tendency toward long responses, we explore several simple and intuitive prompt-engineering strategies. Empirical evaluation shows that appropriate prompts targeting length reduction and controlling information content can achieve significant energy savings of 25-60% by reducing response length while preserving response quality.
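As a rough illustration of the kind of prompt-engineering strategies the abstract describes, the minimal sketch below compares response lengths under a few brevity-targeting prompt templates. The template wordings and the `query_llm` helper are illustrative assumptions, not the paper's actual prompts or evaluation code.

```python
# A minimal sketch of brevity-targeting prompt templates, assuming a
# generic text-in/text-out inference backend. Template wordings and
# query_llm are illustrative placeholders, not the paper's method.

BREVITY_PROMPTS = {
    "baseline": "{question}",
    "brief": "Answer briefly: {question}",
    "minimal": "Give only the minimal answer, with no explanation: {question}",
    "word_limit": "Answer in at most 30 words: {question}",
}


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM inference call; swap in a real backend."""
    raise NotImplementedError


def response_lengths(question: str) -> dict:
    """Compare response length (in words) across the prompting strategies."""
    return {
        name: len(query_llm(template.format(question=question)).split())
        for name, template in BREVITY_PROMPTS.items()
    }
```

Under the paper's framing, shorter responses from such prompts translate directly into fewer generated tokens and hence lower inference energy, provided answer quality is preserved.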
Similar Papers
Green Prompting
Computation and Language
Makes AI use less electricity by changing the questions it is asked.
How Well do LLMs Compress Their Own Chain-of-Thought? A Token Complexity Approach
Computation and Language
Makes AI answer questions faster, but less accurately.
An Empirical Study of LLM Reasoning Ability Under Strict Output Length Constraint
Artificial Intelligence
Makes smart computers answer faster when time is short.