DOREMI: Optimizing Long Tail Predictions in Document-Level Relation Extraction
By: Laura Menotti, Stefano Marchesin, Gianmaria Silvello
Potential Business Impact:
Teaches computers to find rare facts in documents.
Document-Level Relation Extraction (DocRE) presents significant challenges due to its reliance on cross-sentence context and the long-tail distribution of relation types, where many relations have scarce training examples. In this work, we introduce DOcument-level Relation Extraction optiMizing the long taIl (DOREMI), an iterative framework that enhances underrepresented relations through minimal yet targeted manual annotations. Unlike previous approaches that rely on large-scale noisy data or heuristic denoising, DOREMI actively selects the most informative examples to improve training efficiency and robustness. DOREMI can be applied to any existing DocRE model and is effective at mitigating long-tail biases, offering a scalable solution to improve generalization on rare relations.
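The abstract's core idea, actively selecting the most informative examples for long-tail relations and sending them for manual annotation, resembles an uncertainty-based active learning step. The sketch below is a minimal illustration of that general pattern, not the paper's actual selection criterion: the entropy-based uncertainty score, the `tail_threshold` cutoff, and the annotation `budget` are all assumptions introduced for illustration.

```python
import numpy as np

def select_informative_examples(probs, relation_counts, budget=10, tail_threshold=50):
    """Illustrative active-selection step (hypothetical, not DOREMI's exact method).

    probs: (n_examples, n_relations) array of model probabilities.
    relation_counts: dict mapping relation index -> number of training examples.
    Returns indices of up to `budget` examples to annotate, chosen among
    examples predicted as long-tail relations, most uncertain first.
    """
    # Long-tail relations: those with few training examples.
    tail = {r for r, c in relation_counts.items() if c < tail_threshold}
    # Prediction entropy as an uncertainty score for each example.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    predicted = probs.argmax(axis=1)
    # Keep only examples whose top prediction is a tail relation.
    candidates = [i for i in range(len(probs)) if predicted[i] in tail]
    # Most uncertain first; annotate up to `budget` of them.
    candidates.sort(key=lambda i: entropy[i], reverse=True)
    return candidates[:budget]

# Toy example: three candidate pairs, three relation types; relations 1 and 2
# are long-tail (few training examples).
probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.1, 0.8, 0.1]])
counts = {0: 1000, 1: 10, 2: 5}
selected = select_informative_examples(probs, counts, budget=2)
```

In an iterative framework like the one described, the selected examples would be manually annotated, added to the training set, and the model retrained before the next selection round.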
Similar Papers
Combining Distantly Supervised Models with In Context Learning for Monolingual and Cross-Lingual Relation Extraction
Computation and Language
Helps computers find relationships in text better.
GLiDRE: Generalist Lightweight model for Document-level Relation Extraction
Computation and Language
Helps computers understand relationships between words in long texts.
COMM: Concentrated Margin Maximization for Robust Document-Level Relation Extraction
Computation and Language
Helps computers find hidden connections in long texts.