Evaluating Accounting Reasoning Capabilities of Large Language Models
By: Jie Zhou, Xin Chen, Jie Zhang, and more
Potential Business Impact:
Shows how well AI models can handle real accounting work today.
Large language models are transforming learning, cognition, and research across many fields, and integrating them effectively into professional domains such as accounting is a key challenge for enterprise digital transformation. To address this, we define vertical-domain accounting reasoning and propose evaluation criteria derived from an analysis of the training-data characteristics of representative GLM models. These criteria support the systematic study of accounting reasoning and provide benchmarks for performance improvement. Using this framework, we evaluate GLM-6B, GLM-130B, GLM-4, and OpenAI GPT-4 on accounting reasoning tasks. The results show that prompt design significantly affects performance, with GPT-4 demonstrating the strongest capability. Despite these gains, current models remain insufficient for real-world enterprise accounting, indicating the need for further optimization before they deliver full practical value.
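The abstract reports that prompt design significantly affects performance, and that kind of finding can be pictured with a small prompt-sensitivity harness. The sketch below is purely illustrative, not the authors' framework: query_model, the three prompt templates, the substring-match scoring, and the sample question are all assumed stand-ins for whatever model APIs and benchmark items the paper actually used.

```python
# Minimal sketch of a prompt-sensitivity evaluation for accounting reasoning.
# Hypothetical: query_model, TEMPLATES, the scoring rule, and the sample item
# are illustrative stand-ins, not the paper's actual benchmark or model APIs.

from typing import Callable, Dict, List

# The same accounting question wrapped in different prompt styles.
TEMPLATES: Dict[str, str] = {
    "bare": "{question}",
    "role": "You are a certified accountant. {question}",
    "step": "{question}\nThink step by step, then state the final answer.",
}

def evaluate_prompts(
    query_model: Callable[[str], str],   # model under test: prompt -> answer text
    items: List[Dict[str, str]],         # each item: {"question": ..., "answer": ...}
) -> Dict[str, float]:
    """Return accuracy per prompt template, counting an item as correct
    when the gold answer string appears in the model's output."""
    scores: Dict[str, float] = {}
    for name, template in TEMPLATES.items():
        correct = 0
        for item in items:
            prompt = template.format(question=item["question"])
            output = query_model(prompt)
            if item["answer"].lower() in output.lower():
                correct += 1
        scores[name] = correct / len(items)
    return scores

if __name__ == "__main__":
    # Toy item and a dummy "model" so the sketch runs end to end.
    sample = [{
        "question": "A firm buys equipment for $10,000 cash. "
                    "Which account is credited?",
        "answer": "cash",
    }]
    dummy_model = lambda prompt: "The Cash account is credited."
    print(evaluate_prompts(dummy_model, sample))
```

With a real model behind query_model, the prompt-design effect the paper describes would show up as materially different accuracies across templates for the same model and question set.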
Similar Papers
Exploring the Vertical-Domain Reasoning Capabilities of Large Language Models
Computation and Language
Helps computers do accounting jobs better.
Human-Level Reasoning: A Comparative Study of Large Language Models on Logical and Abstract Reasoning
Artificial Intelligence
Tests if AI can think like a person.
Evaluating Large Language Models for Financial Reasoning: A CFA-Based Benchmark Study
Computation and Language
Helps AI understand money questions better.