FinLFQA: Evaluating Attributed Text Generation of LLMs in Financial Long-Form Question Answering
By: Yitao Long, Tiansheng Hu, Yilun Zhao, and more
Potential Business Impact:
Helps AI give correct answers with proof.
Large Language Models (LLMs) frequently hallucinate when answering long-form questions, producing plausible yet factually incorrect answers. A common mitigation strategy is to provide attribution for LLM outputs. However, existing benchmarks primarily focus on simple attribution that retrieves supporting textual evidence as references. We argue that in real-world scenarios such as financial applications, attribution goes beyond reference retrieval. We introduce FinLFQA, a benchmark designed to evaluate the ability of LLMs to generate long-form answers to complex financial questions with reliable and nuanced attributions. FinLFQA evaluates three critical aspects of attribution through human annotations: (1) supporting evidence extracted from financial reports, (2) intermediate numerical reasoning steps, and (3) domain-specific financial knowledge that informs the reasoning process. We further provide an automatic evaluation framework covering both answer quality and attribution quality. Through extensive experiments on eight LLMs across multiple attribution-generation paradigms, we find that fine-grained metrics are important for distinguishing model capabilities, that end-to-end generation achieves performance comparable to post-hoc approaches, and that iterative refinement only helps when guided by external feedback.
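To make the three attribution aspects concrete, below is a minimal sketch of how per-aspect attribution scoring could be structured. The record schema, field names, and the exact-match F1 metric are illustrative assumptions for this sketch, not FinLFQA's actual data format or evaluation code.

from dataclasses import dataclass, field

# Hypothetical record layout for a FinLFQA-style attributed answer;
# the field names are assumptions, not the benchmark's real schema.
@dataclass
class AttributedAnswer:
    answer: str                                                # long-form answer text
    evidence: list[str] = field(default_factory=list)          # sentences cited from the financial report
    reasoning_steps: list[str] = field(default_factory=list)   # intermediate numerical calculations
    knowledge: list[str] = field(default_factory=list)         # domain-specific financial concepts invoked

def set_f1(predicted: list[str], gold: list[str]) -> float:
    """Exact-match F1 over attribution items; a simple stand-in for
    the paper's fine-grained attribution metrics."""
    pred, ref = set(predicted), set(gold)
    if not pred or not ref:
        return 0.0
    overlap = len(pred & ref)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def attribution_scores(pred: AttributedAnswer, gold: AttributedAnswer) -> dict[str, float]:
    # One score per attribution aspect, mirroring the three aspects
    # the benchmark annotates (evidence, reasoning steps, knowledge).
    return {
        "evidence_f1": set_f1(pred.evidence, gold.evidence),
        "reasoning_f1": set_f1(pred.reasoning_steps, gold.reasoning_steps),
        "knowledge_f1": set_f1(pred.knowledge, gold.knowledge),
    }

Reporting the three aspects separately, rather than a single aggregate score, is what lets such fine-grained metrics distinguish models that retrieve evidence well from those that attribute their numerical reasoning or financial knowledge correctly.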
Similar Papers
An Empirical Study of Evaluating Long-form Question Answering
Information Retrieval
Makes computers write better, longer answers.
On Synthesizing Data for Context Attribution in Question Answering
Information Retrieval
Makes AI answers show where they found info.
Document Attribution: Examining Citation Relationships using Large Language Models
Information Retrieval
Checks if AI answers come from the right documents.