Verbatim Data Transcription Failures in LLM Code Generation: A State-Tracking Stress Test

Published: January 7, 2026 | arXiv ID: 2601.03640v1

By: Mohd Ariful Haque, Kishor Datta Gupta, Mohammad Ashiqur Rahman, and more

Potential Business Impact:

Ensures that generated code transcribes provided numeric data (e.g., cryptographic constants or calibration values) exactly, preventing silent data-integrity errors.

Business Areas:
Text Analytics, Data and Analytics, Software

Many real-world software tasks require exact transcription of provided data into code, such as cryptographic constants, protocol test vectors, allowlists, and calibration tables. These tasks are operationally sensitive because small omissions or alterations can remain silent while producing syntactically valid programs. This paper introduces a deliberately minimal transcription-to-code benchmark to isolate this reliability concern in LLM-based code generation. Given a list of high-precision decimal constants, a model must generate Python code that embeds the constants verbatim and performs a simple aggregate computation. We describe the prompting variants, evaluation protocol based on exact-string inclusion, and analysis framework used to characterize state-tracking and long-horizon generation failures. The benchmark is intended as a compact stress test that complements existing code-generation evaluations by focusing on data integrity rather than algorithmic reasoning.
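The paper does not include an implementation, but the exact-string-inclusion protocol it describes can be sketched in a few lines. Everything below (the sample constants, the prompt wording, and the helper names) is an illustrative assumption, not material from the paper.

```python
# Minimal sketch of an exact-string-inclusion check, assuming the benchmark
# asks for verbatim embedding of decimal constants plus a simple aggregate.
# Constants, prompt wording, and helper names are hypothetical.

CONSTANTS = [
    "3.141592653589793238462643383279",
    "2.718281828459045235360287471352",
    "1.414213562373095048801688724209",
]

def build_prompt(constants: list[str]) -> str:
    """Ask the model to embed every constant verbatim and sum them."""
    listing = "\n".join(constants)
    return (
        "Write a Python program that stores the following decimal constants "
        "exactly as written and prints their sum:\n" + listing
    )

def passes_exact_inclusion(generated_code: str, constants: list[str]) -> bool:
    """A sample passes only if every constant appears verbatim in the code.

    Exact substring matching means any rounding, truncation, or altered
    digit causes a failure, even if the program is syntactically valid.
    """
    return all(c in generated_code for c in constants)

prompt = build_prompt(CONSTANTS)  # would be sent to the model under test

# Example: a syntactically valid program that silently truncated one constant.
candidate = """
from decimal import Decimal
values = [
    Decimal("3.141592653589793238462643383279"),
    Decimal("2.718281828459045"),  # truncated: a silent transcription failure
    Decimal("1.414213562373095048801688724209"),
]
print(sum(values))
"""

print(passes_exact_inclusion(candidate, CONSTANTS))  # False
```

The appeal of this style of check, as the abstract notes, is that it needs no execution or output parsing: a single dropped digit fails the sample even when the generated program runs cleanly.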

Page Count
7 pages

Category
Computer Science:
Software Engineering