Finch: Benchmarking Finance & Accounting across Spreadsheet-Centric Enterprise Workflows
By: Haoyu Dong, Pengkun Zhang, Yan Gao, and more
Potential Business Impact:
Tests how well AI can do real office jobs.
We introduce a finance & accounting benchmark (Finch) for evaluating AI agents on real-world, enterprise-grade professional workflows -- interleaving data entry, structuring, formatting, web search, cross-file retrieval, calculation, modeling, validation, translation, visualization, and reporting. Finch is sourced from authentic enterprise workspaces at Enron (15,000 spreadsheets and 500,000 emails from 150 employees) and other financial institutions, preserving in-the-wild messiness across multimodal artifacts (text, tables, formulas, charts, code, and images) and spanning diverse domains such as budgeting, trading, and asset management. We propose a workflow construction process that combines LLM-assisted discovery with expert annotation: (1) LLM-assisted, expert-verified derivation of workflows from real-world email threads and version histories of spreadsheet files, and (2) meticulous expert annotation of workflows, requiring over 700 hours of domain-expert effort. This yields 172 composite workflows with 384 tasks, involving 1,710 spreadsheets with 27 million cells, along with PDFs and other artifacts, capturing the intrinsically messy, long-horizon, knowledge-intensive, and collaborative nature of real-world enterprise work. We conduct both human and automated evaluations of frontier AI systems, including GPT 5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4, and Qwen 3 Max: GPT 5.1 Pro spends 48 hours in total yet passes only 38.4% of workflows, while Claude Sonnet 4.5 passes just 25.0%. Comprehensive case studies further surface the challenges that real-world enterprise workflows pose for AI agents.
Similar Papers
Finance Agent Benchmark: Benchmarking LLMs on Real-world Financial Research Tasks
Computational Engineering, Finance, and Science
Tests AI on real money problems and finds big gaps.
FinSearchComp: Towards a Realistic, Expert-Level Evaluation of Financial Search and Reasoning
Machine Learning (CS)
Tests AI's ability to find and understand financial information.
Benchmarking LLM Agents for Wealth-Management Workflows
Artificial Intelligence
Lets AI assistants manage money tasks reliably.