Larger Is Not Always Better: Exploring Small Open-source Language Models in Logging Statement Generation
By: Renyi Zhong, Yichen Li, Guangba Yu, and more
Potential Business Impact:
Helps computers write better code logs automatically.
Developers write logging statements to produce logs that document system behavior and support software maintenance, so high-quality logging is essential; manual logging, however, is error-prone and inconsistent. Recent work applies large language models (LLMs) to automated logging statement generation, but LLMs raise privacy and resource concerns that limit their suitability for enterprise use. This paper presents the first large-scale empirical study evaluating small open-source language models (SOLMs) for automated logging statement generation. We evaluate four prominent SOLMs under various prompting strategies, including Retrieval-Augmented Generation (RAG), and parameter-efficient fine-tuning techniques such as Low-Rank Adaptation (LoRA). Our results show that LoRA-fine-tuned SOLMs with RAG prompts, particularly Qwen2.5-coder-14B, outperform existing tools and LLM baselines in predicting logging locations and generating high-quality statements, and generalize robustly across diverse repositories. These findings position SOLMs as a privacy-preserving, efficient alternative for automated logging.
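To make the two techniques named in the abstract concrete, below is a minimal sketch of LoRA adapter setup on a small open-source code model plus a RAG-style prompt for logging statement generation, assuming the Hugging Face transformers and peft libraries. The model name, adapter hyperparameters, retrieval corpus, and prompt wording are illustrative assumptions, not the paper's actual training or prompting setup.

```python
# Minimal sketch (assumptions, not the paper's artifacts): LoRA adapters on a
# SOLM, and a RAG prompt that prepends retrieved logged methods as demos.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL = "Qwen/Qwen2.5-Coder-14B"  # one of the SOLMs the study highlights

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# LoRA: freeze the base weights and train only low-rank adapter matrices.
lora_cfg = LoraConfig(
    r=16,                                 # adapter rank (hyperparameter assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of weights train


# RAG prompting: retrieve similar already-logged methods from a corpus and
# prepend them as demonstrations before the unlogged target method.
def build_rag_prompt(target_method: str, retrieved: list[str]) -> str:
    demos = "\n\n".join(f"### Example logged method\n{m}" for m in retrieved)
    return (
        f"{demos}\n\n"
        f"### Target method (insert an appropriate logging statement)\n"
        f"{target_method}\n"
    )
```

Because only the adapter weights are updated, fine-tuning even a 14B-parameter model stays feasible on modest in-house hardware, which is what makes the privacy-preserving, on-premise deployment the abstract argues for practical.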
Similar Papers
Large Language Models for Fault Localization: An Empirical Study
Software Engineering
Finds bugs in computer code faster.
KnowsLM: A framework for evaluation of small language models for knowledge augmentation and humanised conversations
Computation and Language
Makes AI better at talking and knowing facts.
Study on LLMs for Promptagator-Style Dense Retriever Training
Information Retrieval
Makes AI better at finding specific information.