OLAF: Towards Robust LLM-Based Annotation Framework in Empirical Software Engineering
By: Mia Mohammad Imran, Tarannum Shaila Zaman
Large Language Models (LLMs) are increasingly used in empirical software engineering (ESE) to automate or assist annotation tasks such as labeling commits, issues, and qualitative artifacts. Yet the reliability and reproducibility of such annotations remain underexplored. Existing studies often lack standardized measures for reliability, calibration, and drift, and frequently omit essential configuration details. We argue that LLM-based annotation should be treated as a measurement process rather than a purely automated activity. In this position paper, we outline the Operationalization for LLM-based Annotation Framework (OLAF), a conceptual framework that organizes key constructs: reliability, calibration, drift, consensus, aggregation, and transparency. The paper aims to motivate methodological discussion and future empirical work toward more transparent and reproducible LLM-based annotation in software engineering research.
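As a minimal sketch of what "treating annotation as a measurement process" could look like in practice (this example is not from the paper; the labels, runs, and task are hypothetical), one standardized reliability measure is chance-corrected agreement between repeated LLM annotation runs over the same items, e.g. Cohen's kappa:

# Hypothetical sketch: run-to-run reliability of LLM annotations via Cohen's kappa.
# The task (commit labeling) and the label sequences below are illustrative only.
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotation runs over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically in both runs.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each run's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Two repeated LLM runs labeling the same five commits (hypothetical data).
run_1 = ["bugfix", "feature", "bugfix", "refactor", "feature"]
run_2 = ["bugfix", "feature", "feature", "refactor", "feature"]
print(f"run-to-run reliability (kappa): {cohen_kappa(run_1, run_2):.2f}")  # ~0.69

Reporting such a statistic alongside model, prompt, and temperature settings is one way the reliability and transparency constructs named in the abstract could be operationalized.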