LongRecall: A Structured Approach for Robust Recall Evaluation in Long-Form Text
By: MohammadJavad Ardestani, Ehsan Kamalloo, Davood Rafiei
Potential Business Impact:
Checks whether AI answers include all important details.
The completeness of machine-generated text, ensuring that it captures all relevant information, is crucial in domains such as medicine and law and in tasks like list-based question answering (QA), where omissions can have serious consequences. However, existing recall metrics often depend on lexical overlap, leading to errors with unsubstantiated entities and paraphrased answers, while LLM-as-a-Judge methods with long holistic prompts capture broader semantics but remain prone to misalignment and hallucinations without structured verification. We introduce LongRecall, a general three-stage recall evaluation framework that decomposes answers into self-contained facts, successively narrows plausible candidate matches through lexical and semantic filtering, and verifies their alignment through structured entailment checks. This design reduces false positives and false negatives while accommodating diverse phrasings and contextual variations, serving as a foundational building block for systematic recall assessment. We evaluate LongRecall on three challenging long-form QA benchmarks using both human annotations and LLM-based judges, demonstrating substantial improvements in recall accuracy over strong lexical and LLM-as-a-Judge baselines.
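To make the three-stage design concrete, below is a minimal Python sketch under stated assumptions: the reference answer is assumed to be already decomposed into self-contained facts (the paper uses an LLM for that step), a token-overlap Jaccard score stands in for the paper's lexical and semantic candidate filtering, and the entails callable is a hypothetical placeholder for the structured entailment check, defaulting here to a stricter overlap test. The thresholds are illustrative, not values from the paper.

import re

def _tokens(text):
    # Lowercased word tokens; a crude stand-in for real tokenization.
    return set(re.findall(r"\w+", text.lower()))

def lexical_overlap(a, b):
    # Jaccard overlap of token sets (stage 2: candidate filtering).
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def long_recall_sketch(gold_facts, answer_sentences,
                       filter_threshold=0.2, entails=None):
    """Three-stage recall in the spirit of LongRecall (illustrative only).

    Stage 1 (assumed done upstream): gold_facts is the reference answer
    decomposed into self-contained facts.
    Stage 2: narrow answer sentences to plausible candidates per fact;
    a cheap lexical filter stands in for lexical + semantic filtering.
    Stage 3: verify candidates; entails(premise, hypothesis) is a
    placeholder for the structured entailment check, defaulting to a
    stricter overlap test.
    """
    entails = entails or (lambda p, h: lexical_overlap(p, h) > 0.5)
    covered = 0
    for fact in gold_facts:
        candidates = [s for s in answer_sentences
                      if lexical_overlap(fact, s) >= filter_threshold]
        if any(entails(sentence, fact) for sentence in candidates):
            covered += 1
    return covered / len(gold_facts) if gold_facts else 1.0

# Example: one of two reference facts is covered, so recall is 0.5.
gold = ["Aspirin inhibits COX-1.", "Aspirin reduces fever."]
answer = ["Aspirin inhibits COX-1 in platelets.",
          "It is also used for headaches."]
print(long_recall_sketch(gold, answer))  # 0.5

The staged structure is what drives the claimed error reduction: a candidate counts toward recall only if it survives both the narrowing filter and the entailment verdict, which suppresses false positives, while paraphrases that a purely lexical metric would miss can still pass the semantic and entailment stages, reducing false negatives.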
Similar Papers
LONGQAEVAL: Designing Reliable Evaluations of Long-Form Clinical QA under Resource Constraints
Computation and Language
Tests doctor AI answers faster, cheaper.
Enhancing Long Document Long Form Summarisation with Self-Planning
Computation and Language
Makes summaries of long texts more accurate.
How important is Recall for Measuring Retrieval Quality?
Computation and Language
Finds best answers even when you don't know all the facts.