Few-shot Cross-lingual Aspect-Based Sentiment Analysis with Sequence-to-Sequence Models
By: Jakub Šmíd, Pavel Přibáň, Pavel Král
Potential Business Impact:
Teaches computers to understand opinions in new languages.
Aspect-based sentiment analysis (ABSA) has received substantial attention in English, yet challenges remain for low-resource languages due to the scarcity of labelled data. Current cross-lingual ABSA approaches often rely on external translation tools and overlook the potential benefits of incorporating a small number of target language examples into training. In this paper, we evaluate the effect of adding few-shot target language examples to the training set across four ABSA tasks, six target languages, and two sequence-to-sequence models. We show that adding as few as ten target language examples significantly improves performance over zero-shot settings and achieves a similar effect to constrained decoding in reducing prediction errors. Furthermore, we demonstrate that combining 1,000 target language examples with English data can even surpass monolingual baselines. These findings offer practical insights for improving cross-lingual ABSA in low-resource and domain-specific settings, as obtaining ten high-quality annotated examples is both feasible and highly effective.
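The core recipe in the abstract is simple data mixing: take the full English training set and add a handful of annotated target-language examples before fine-tuning the sequence-to-sequence model. A minimal sketch of that mixing step, assuming a toy (sentence, label-string) data format and a hypothetical `build_training_set` helper (not from the paper):

```python
import random

def build_training_set(english_examples, target_examples, k=10, seed=42):
    """Combine English data with k sampled target-language examples (few-shot)."""
    rng = random.Random(seed)
    few_shot = (rng.sample(target_examples, k)
                if len(target_examples) >= k else list(target_examples))
    mixed = list(english_examples) + few_shot
    rng.shuffle(mixed)  # interleave so target examples are spread across batches
    return mixed

# Toy data: source sentence paired with a seq2seq target like "aspect|polarity".
english = [(f"en sentence {i}", "food|positive") for i in range(100)]
czech = [(f"cs veta {i}", "jidlo|pozitivni") for i in range(50)]

train = build_training_set(english, czech, k=10)
print(len(train))  # 100 English + 10 Czech examples -> 110
```

In the zero-shot baseline `k` would be 0; the paper's finding is that even `k = 10` gives a significant gain, and `k = 1000` combined with English data can beat monolingual training.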
Similar Papers
Advancing Cross-lingual Aspect-Based Sentiment Analysis with LLMs and Constrained Decoding for Sequence-to-Sequence Models
Computation and Language
Helps computers understand opinions in any language.
Improving Generative Cross-lingual Aspect-Based Sentiment Analysis with Constrained Decoding
Computation and Language
Helps computers understand feelings in many languages.
Cross-lingual Aspect-Based Sentiment Analysis: A Survey on Tasks, Approaches, and Challenges
Computation and Language
Helps computers understand opinions in many languages.