Brevity is the soul of sustainability: Characterizing LLM response lengths

Published: June 10, 2025 | arXiv ID: 2506.08686v2

By: Soham Poddar, Paramita Koley, Janardan Misra, et al.

Potential Business Impact:

Makes AI give shorter, useful answers, saving energy.

Business Areas:
Energy Efficiency, Energy, Sustainability

A significant portion of the energy consumed by Large Language Models (LLMs) arises from their inference processes; hence, developing energy-efficient methods for inference is crucial. While several techniques exist for inference optimization, output compression remains relatively unexplored, with only a few preliminary efforts addressing this aspect. In this work, we first benchmark 12 decoder-only LLMs across 5 datasets, revealing that these models often produce responses that are substantially longer than necessary. We then conduct a comprehensive quality assessment of LLM responses, formally defining six information categories present in LLM responses. We show that LLMs often tend to include redundant or additional information beyond the minimal answer. To address this issue of long responses, we explore several simple and intuitive prompt-engineering strategies. Empirical evaluation shows that appropriate prompts targeting length reduction and controlling information content can achieve significant energy savings of 25–60% by reducing response length while preserving the quality of LLM responses.
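The length-reduction idea in the abstract can be sketched in a few lines: prepend a brevity instruction to the prompt, then compare the approximate token counts of a verbose and a concise response. This is a minimal illustrative sketch; the prompt wording, helper names, and whitespace-based token count are assumptions for illustration, not the exact prompts or metrics evaluated in the paper.

```python
# Minimal sketch of a length-targeting prompt strategy and a crude
# length-reduction metric. All names and prompt text are hypothetical.

def concise_prompt(question: str) -> str:
    """Prepend a brevity instruction to a user question."""
    return ("Answer the following question with only the minimal "
            "information needed, with no extra explanation.\n" + question)

def approx_tokens(text: str) -> int:
    """Rough token count via whitespace splitting (a real study
    would use the model's own tokenizer)."""
    return len(text.split())

def length_reduction(baseline: str, concise: str) -> float:
    """Fractional reduction in approximate tokens, in [0, 1]."""
    b, c = approx_tokens(baseline), approx_tokens(concise)
    return 1.0 - c / b if b else 0.0

# Two hypothetical model responses to "What is the capital of France?":
baseline = ("The capital of France is Paris. Paris has been the capital "
            "for centuries and is also the country's largest city, known "
            "for landmarks such as the Eiffel Tower and the Louvre.")
concise = "Paris."
print(f"reduction: {length_reduction(baseline, concise):.0%}")
```

Since decoding cost scales roughly with the number of generated tokens, a reduction in response length of this kind translates directly into the inference energy savings the paper reports.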

Page Count
17 pages

Category
Computer Science:
Computation and Language