Document Attribution: Examining Citation Relationships using Large Language Models
By: Vipula Rawte, Ryan A. Rossi, Franck Dernoncourt, and more
Potential Business Impact:
Checks if AI answers come from the right documents.
As Large Language Models (LLMs) are increasingly applied to document-based tasks such as summarization, question answering, and information extraction, where users expect answers drawn from the provided documents rather than from the model's parametric knowledge, ensuring the trustworthiness and interpretability of these systems has become a critical concern. A central approach to this challenge is attribution: tracing generated outputs back to their source documents. However, since LLMs can produce inaccurate or imprecise responses, it is crucial to assess the reliability of the citations they provide. To this end, our work proposes two techniques. (1) A zero-shot approach that frames attribution as a straightforward textual entailment task; using flan-ul2, it improves on the best baseline by 0.27% on the in-distribution (ID) set and 2.4% on the out-of-distribution (OOD) set of AttributionBench. (2) An exploration of the role of the attention mechanism in the attribution process; with the smaller flan-t5-small, F1 scores outperform the baseline at almost every layer, the exceptions being layer 4 and layers 8 through 11.
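To make the first technique concrete, here is a minimal sketch of attribution framed as zero-shot textual entailment, assuming the Hugging Face Transformers API. The prompt template, the yes/no answer parsing, and the example claim-evidence pair are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: attribution as zero-shot textual entailment with flan-ul2.
# The prompt wording and label parsing below are illustrative, not the
# paper's exact setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-ul2"  # the paper's model; a smaller flan-t5 also works for testing

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def is_attributable(claim: str, evidence: str) -> bool:
    """Ask the model whether the cited evidence entails the generated claim."""
    prompt = (
        f"Premise: {evidence}\n"
        f"Hypothesis: {claim}\n"
        "Does the premise entail the hypothesis? Answer yes or no."
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=4)
    answer = tokenizer.decode(outputs[0], skip_special_tokens=True).strip().lower()
    return answer.startswith("yes")

# Hypothetical example: verify one (response, citation) pair.
claim = "The Eiffel Tower was completed in 1889."
evidence = "Construction of the Eiffel Tower finished in March 1889."
print(is_attributable(claim, evidence))  # expected: True
```

Because the task reduces to a single entailment query per citation, no fine-tuning or labeled attribution data is needed, which is what makes the approach zero-shot.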
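For the second technique, the abstract does not spell out how attention is used, so the following is only a sketch of how per-layer attention can be extracted from flan-t5-small for analysis; the scoring heuristic (mean attention mass from claim tokens onto evidence tokens) is an assumption for illustration, not the paper's method.

```python
# Sketch: inspecting per-layer encoder attention in flan-t5-small.
# Turning attention into an attribution signal via mean claim-to-evidence
# attention mass is an illustrative choice, not the paper's exact method.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-small", output_attentions=True
)

def layer_attention_scores(claim: str, evidence: str) -> list[float]:
    """For each encoder layer, average attention from claim tokens onto evidence tokens."""
    n_evid = len(tokenizer(evidence, add_special_tokens=False).input_ids)
    inputs = tokenizer(f"{evidence} {claim}", return_tensors="pt")
    with torch.no_grad():
        enc = model.get_encoder()(**inputs)
    scores = []
    for attn in enc.attentions:  # one (batch, heads, seq, seq) tensor per layer
        # rows: claim tokens (positions after the evidence); cols: evidence tokens
        scores.append(attn[0, :, n_evid:, :n_evid].mean().item())
    return scores
```

Scores like these could then be compared against attribution labels layer by layer, which is consistent with the abstract's finding that some layers (e.g., layer 4 and layers 8 through 11) are less useful than others.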
Similar Papers
Zero-shot data citation function classification using transformer-based large language models (LLMs)
Machine Learning (CS)
Helps understand how science papers use data.
Attribution, Citation, and Quotation: A Survey of Evidence-based Text Generation with Large Language Models
Computation and Language
Makes AI stories show where their facts came from.
FinLFQA: Evaluating Attributed Text Generation of LLMs in Financial Long-Form Question Answering
Computation and Language
Helps AI give correct answers with proof.