FinSearchComp: Towards a Realistic, Expert-Level Evaluation of Financial Search and Reasoning
By: Liang Hu, Jianpeng Jiao, Jiashuo Liu, and more
Potential Business Impact:
Tests AI's ability to find and understand financial information.
Search has emerged as core infrastructure for LLM-based agents and is widely viewed as critical on the path toward more general intelligence. Finance is a particularly demanding proving ground: analysts routinely conduct complex, multi-step searches over time-sensitive, domain-specific data, making it ideal for assessing both search proficiency and knowledge-grounded reasoning. Yet no existing open financial dataset evaluates the data-searching capability of end-to-end agents, largely because constructing realistic, complicated tasks requires deep financial expertise and time-sensitive answers are hard to evaluate. We present FinSearchComp, the first fully open-source agent benchmark for realistic, open-domain financial search and reasoning. FinSearchComp comprises three tasks -- Time-Sensitive Data Fetching, Simple Historical Lookup, and Complex Historical Investigation -- that closely reproduce real-world financial analyst workflows. To ensure difficulty and reliability, we engage 70 professional financial experts for annotation and implement a rigorous multi-stage quality-assurance pipeline. The benchmark includes 635 questions spanning global and Greater China markets, on which we evaluate 21 models (products). Grok 4 (web) tops the global subset, approaching expert-level accuracy, while DouBao (web) leads on the Greater China subset. Experimental analyses show that equipping agents with web search and financial plugins substantially improves results on FinSearchComp, and that the country of origin of models and tools significantly impacts performance. By aligning with realistic analyst tasks and providing end-to-end evaluation, FinSearchComp offers a professional, high-difficulty testbed for complex financial search and reasoning.
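The benchmark reports accuracy separately per market subset (global vs. Greater China) and per task type. A minimal sketch of how such per-subset scoring might be computed; the task names come from the paper, but the record format and function names below are illustrative assumptions, not the authors' actual evaluation code:

```python
# Hypothetical per-subset accuracy scoring for a FinSearchComp-style
# benchmark. Each result record holds the market subset, the task type,
# and whether the agent's answer was graded correct.
from collections import defaultdict

def score_by_subset(results):
    """Return {subset: accuracy} given a list of result dicts."""
    totals = defaultdict(lambda: [0, 0])  # subset -> [num_correct, num_total]
    for r in results:
        totals[r["subset"]][1] += 1
        if r["correct"]:
            totals[r["subset"]][0] += 1
    return {s: correct / total for s, (correct, total) in totals.items()}

# Toy usage with three graded answers.
results = [
    {"subset": "Global", "task": "Time-Sensitive Data Fetching", "correct": True},
    {"subset": "Global", "task": "Simple Historical Lookup", "correct": False},
    {"subset": "Greater China", "task": "Complex Historical Investigation", "correct": True},
]
print(score_by_subset(results))  # {'Global': 0.5, 'Greater China': 1.0}
```

The same grouping could be keyed on `task` instead of `subset` to reproduce a per-task breakdown.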
Similar Papers
Finance Agent Benchmark: Benchmarking LLMs on Real-world Financial Research Tasks
Computational Engineering, Finance, and Science
Tests AI on real-world money problems and finds big gaps.
BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent
Computation and Language
Tests how well AI finds answers online.
XFinBench: Benchmarking LLMs in Complex Financial Problem Solving and Reasoning
Computation and Language
Tests computers on hard money problems.