LTG at SemEval-2025 Task 10: Optimizing Context for Classification of Narrative Roles
By: Egil Rønningstad, Gaurav Negi
Potential Business Impact:
Helps computers understand who plays what role in news stories while reading less text.
Our contribution to the SemEval 2025 shared task 10, subtask 1 on entity framing, tackles the challenge of selecting the necessary segments from longer documents as context for classification with a masked language model. We show that a simple entity-oriented heuristic for context selection can enable text classification with models that have a limited context window. Our context selection approach, combined with the XLM-RoBERTa language model, is on par with, or outperforms, supervised fine-tuning of larger generative language models.
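The entity-oriented context selection described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it keeps sentences that mention the target entity, in document order, until a rough token budget is reached. The function name, the whitespace token count, and the budget value are all illustrative assumptions.

```python
def select_context(sentences, entity, max_tokens=256):
    """Hypothetical entity-oriented context selection:
    keep sentences mentioning the entity, in document order,
    until a crude whitespace-token budget is exhausted."""
    selected = []
    budget = max_tokens
    for sent in sentences:
        if entity.lower() in sent.lower():
            n = len(sent.split())  # rough token count, not a real tokenizer
            if n > budget:
                break
            selected.append(sent)
            budget -= n
    return " ".join(selected)

# Toy document: only entity-mentioning sentences survive selection.
doc = [
    "The summit opened in Geneva on Monday.",
    "President Dupont criticized the sanctions.",
    "Markets reacted calmly to the news.",
    "Later, Dupont met with union leaders.",
]
context = select_context(doc, "Dupont", max_tokens=30)
```

In practice one would use the model's own tokenizer to measure length and might also keep neighboring sentences for coreference, but the core idea is the same: the entity's mentions anchor which parts of a long document fit into a small context window.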