Evaluating Accounting Reasoning Capabilities of Large Language Models

Published: January 10, 2026 | arXiv ID: 2601.06707v1

By: Jie Zhou, Xin Chen, Jie Zhang, and more

Potential Business Impact:

Helps enterprises apply large language models to automate and improve accounting tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models are transforming learning, cognition, and research across many fields. Effectively integrating them into professional domains such as accounting is a key challenge for enterprise digital transformation. To address this, we define vertical-domain accounting reasoning and propose evaluation criteria derived from an analysis of the training-data characteristics of representative GLM models. These criteria support systematic study of accounting reasoning and provide benchmarks for performance improvement. Using this framework, we evaluate GLM-6B, GLM-130B, GLM-4, and OpenAI GPT-4 on accounting reasoning tasks. Results show that prompt design significantly affects performance, with GPT-4 demonstrating the strongest capability. Despite these gains, current models remain insufficient for real-world enterprise accounting, indicating the need for further optimization to unlock their full practical value.

Page Count
8 pages

Category
Computer Science:
Computation and Language