LACA: Improving Cross-lingual Aspect-Based Sentiment Analysis with LLM Data Augmentation
By: Jakub Šmíd, Pavel Přibáň, Pavel Král
Potential Business Impact:
Lets computers understand feelings in any language.
Cross-lingual aspect-based sentiment analysis (ABSA) performs detailed sentiment analysis in a target language by transferring knowledge from a source language with available annotated data. Most existing methods depend heavily on often unreliable translation tools to bridge the language gap. In this paper, we propose a new approach that leverages a large language model (LLM) to generate high-quality pseudo-labelled data in the target language without the need for translation tools. First, the framework trains an ABSA model to obtain predictions for unlabelled target-language data. Next, the LLM is prompted to generate natural sentences that represent these noisy predictions better than the original text does. The ABSA model is then further fine-tuned on the resulting pseudo-labelled dataset. We demonstrate the effectiveness of this method across six languages and five backbone models, surpassing previous state-of-the-art translation-based approaches. The proposed framework also supports generative models, and we show that fine-tuned LLMs outperform smaller multilingual models.
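The three-step pipeline in the abstract (predict on unlabelled target text, have an LLM generate sentences matching the noisy predictions, then fine-tune on the result) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `absa_predict` and `llm_generate` are hypothetical stand-ins for a source-trained ABSA model and a prompted LLM.

```python
def build_pseudo_labelled_dataset(absa_predict, llm_generate, unlabelled):
    """Assemble target-language pseudo-labelled data without translation tools.

    absa_predict(sentence) -> list of (aspect, sentiment) pairs (noisy labels)
    llm_generate(labels)   -> a natural target-language sentence expressing them
    """
    dataset = []
    for sentence in unlabelled:
        labels = absa_predict(sentence)       # step 1: noisy predictions
        if not labels:
            continue                          # nothing to express for this sentence
        generated = llm_generate(labels)      # step 2: LLM writes a cleaner sentence
        dataset.append({"text": generated, "labels": labels})
    return dataset                            # step 3: fine-tune the ABSA model on this


# Toy stand-ins to show the data flow (not real models):
stub_predict = lambda s: [("food", "positive")] if "food" in s else []
stub_generate = lambda labels: f"The {labels[0][0]} was excellent."

pseudo_data = build_pseudo_labelled_dataset(
    stub_predict, stub_generate, ["the food rocks", "hello there"]
)
print(pseudo_data)
# One pseudo-labelled example survives; the aspect-free sentence is dropped.
```

In the actual framework the generated sentences replace the original target-language text as training input, so label noise from step 1 is mitigated because each sentence is constructed to match its labels by design.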
Similar Papers
Advancing Cross-lingual Aspect-Based Sentiment Analysis with LLMs and Constrained Decoding for Sequence-to-Sequence Models
Computation and Language
Helps computers understand opinions in any language.
LLaMA-Based Models for Aspect-Based Sentiment Analysis
Computation and Language
Makes computers understand feelings about many things.
Large Language Models for Czech Aspect-Based Sentiment Analysis
Computation and Language
Helps computers understand feelings about specific things.