Skim-Aware Contrastive Learning for Efficient Document Representation
By: Waheed Ahmed Abro, Zied Bouraoui
Although transformer-based models have shown strong performance on word- and sentence-level tasks, effectively representing long documents, especially in fields like law and medicine, remains difficult. Sparse attention mechanisms can handle longer inputs, but they are resource-intensive and often fail to capture full-document context. Hierarchical transformer models offer better efficiency but do not explicitly model how different sections of a document relate to one another. In contrast, humans often skim texts, focusing on important sections to understand the overall message. Drawing on this human strategy, we introduce a new self-supervised contrastive learning framework that enhances long document representation. Our method randomly masks a section of the document and uses a natural language inference (NLI)-based contrastive objective to align its representation with relevant sections while distancing it from unrelated ones. This mimics how humans synthesize information while skimming, resulting in representations that are both richer and more computationally efficient. Experiments on legal and biomedical texts demonstrate significant gains in both accuracy and efficiency.
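For intuition, here is a minimal sketch of what such a section-level contrastive objective could look like in PyTorch. This is an illustrative assumption, not the authors' implementation: the NLI-based scoring used to pick relevant and unrelated sections is abstracted into precomputed positive and negative embeddings, and the function name, tensor shapes, and temperature value are hypothetical.

```python
# Sketch of an InfoNCE-style objective: the embedding of a randomly masked
# section is pulled toward embeddings of related sections (positives) and
# pushed away from embeddings of unrelated sections (negatives).
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def skim_contrastive_loss(masked_emb, positive_embs, negative_embs, temperature=0.05):
    """
    masked_emb:    (d,)   embedding of the randomly masked section
    positive_embs: (P, d) embeddings of sections judged relevant (e.g., via NLI)
    negative_embs: (N, d) embeddings of unrelated sections (e.g., other documents)
    """
    masked_emb = F.normalize(masked_emb, dim=-1)
    pos = F.normalize(positive_embs, dim=-1)
    neg = F.normalize(negative_embs, dim=-1)

    # Temperature-scaled cosine similarities.
    pos_sim = pos @ masked_emb / temperature                      # (P,)
    neg_sim = neg @ masked_emb / temperature                      # (N,)

    # Each positive is contrasted against all negatives; the correct
    # class (index 0) is the positive similarity.
    logits = torch.cat(
        [pos_sim.unsqueeze(1), neg_sim.unsqueeze(0).expand(pos_sim.size(0), -1)],
        dim=1,
    )                                                             # (P, 1 + N)
    labels = torch.zeros(pos_sim.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```

In practice, the positives and negatives would come from encoding document sections with the same encoder that produces the masked-section embedding; the sketch simply shows how aligning with relevant parts and distancing from unrelated ones can be expressed as a single cross-entropy over similarity scores.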
Similar Papers
Enhancing Long Document Long Form Summarisation with Self-Planning
Computation and Language
Uses self-planning to improve long-form summarisation of long documents.
Advancing Text Classification with Large Language Models and Neural Attention Mechanisms
Computation and Language
Combines large language models with neural attention mechanisms to improve text classification.
Efficient Attention Mechanisms for Large Language Models: A Survey
Computation and Language
Surveys efficient attention mechanisms that reduce the cost of processing long inputs in large language models.