Taxation Perspectives from Large Language Models: A Case Study on Additional Tax Penalties
By: Eunkyung Choi, Young Jin Suh, Hun Park, and more
Potential Business Impact:
Helps computers understand tax rules better.
How capable are large language models (LLMs) in the domain of taxation? Although numerous studies have explored the legal domain in general, research dedicated to taxation remains scarce. Moreover, the datasets used in these studies are either simplified, failing to reflect real-world complexities, or unavailable as open source. To address this gap, we introduce PLAT, a new benchmark designed to assess the ability of LLMs to predict the legitimacy of additional tax penalties. PLAT is constructed to evaluate LLMs' understanding of tax law, particularly in cases where resolving the issue requires more than simply applying the related statutes. Our experiments with six LLMs reveal that their baseline capabilities are limited, especially when dealing with conflicting issues that demand a comprehensive understanding. However, we found that this limitation can be mitigated by enabling retrieval, self-reasoning, and discussion among multiple agents with specific role assignments.
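The abstract's mitigation strategy (multiple agents with assigned roles discussing a case, then reaching a decision) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual protocol: the role names, the stand-in agent functions, and the majority-vote aggregation rule are all assumptions; a real system would prompt an LLM with a role-specific instruction for each agent.

```python
# Hypothetical sketch of role-assigned multi-agent discussion for judging
# whether an additional tax penalty is legitimate. Roles, agents, and the
# majority-vote rule are illustrative assumptions, not the paper's method.
from collections import Counter
from typing import Callable, Dict

def run_discussion(agents: Dict[str, Callable[[str], str]], case: str) -> str:
    """Collect each role's verdict on the case and return the majority verdict."""
    verdicts = {role: agent(case) for role, agent in agents.items()}
    majority, _count = Counter(verdicts.values()).most_common(1)[0]
    return majority

# Stand-in agents; in practice each would be an LLM call with a role prompt
# (and retrieval over relevant statutes, plus a self-reasoning step).
agents = {
    "taxpayer_advocate": lambda case: "illegitimate",
    "tax_authority":     lambda case: "legitimate",
    "neutral_judge":     lambda case: "legitimate",
}

verdict = run_discussion(agents, "Penalty for late filing of a corporate tax return")
print(verdict)  # majority verdict across the three roles
```

An odd number of roles avoids ties under simple majority voting; richer schemes (iterative debate rounds, a moderator agent) follow the same pattern of collecting and aggregating role-conditioned outputs.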
Similar Papers
Can LLMs Identify Tax Abuse?
Computational Finance
AI finds new ways to save money on taxes.
Language Models and Logic Programs for Trustworthy Financial Reasoning
Computation and Language
Helps computers do taxes accurately and cheaply.
Evaluating the Role of Large Language Models in Legal Practice in India
Computation and Language
AI helps lawyers write and find legal problems.