Can LLMs Faithfully Explain Themselves in Low-Resource Languages? A Case Study on Emotion Detection in Persian
By: Mobina Mehrazar, Mohammad Amin Yousefi, Parisa Abolfath Beygi, and more
Potential Business Impact:
Checks whether AI explains its reasoning honestly.
Large language models (LLMs) are increasingly used to generate self-explanations alongside their predictions, a practice that raises concerns about the faithfulness of these explanations, especially in low-resource languages. This study evaluates the faithfulness of LLM-generated explanations in the context of emotion classification in Persian, a low-resource language, by comparing the influential words identified by the model against those identified by human annotators. We assess faithfulness using confidence scores derived from token-level log-probabilities. Two prompting strategies, differing in the order of explanation and prediction (Predict-then-Explain and Explain-then-Predict), are tested for their impact on explanation faithfulness. Our results reveal that while LLMs achieve strong classification performance, their generated explanations often diverge from faithful reasoning, showing greater agreement with each other than with human judgments. These results highlight the limitations of current explanation methods and metrics, emphasizing the need for more robust approaches to ensure LLM reliability in multilingual and low-resource contexts.
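The abstract points to two measurable ingredients: a confidence score derived from token-level log-probabilities, and the agreement between the influential words highlighted by the model and those marked by human annotators. The sketch below is a minimal illustration of both ideas under assumptions; the function names, the Jaccard-overlap choice, and the log-probability format are hypothetical and not the paper's actual implementation.

```python
# A minimal sketch (not the paper's exact method) of two ideas from the abstract:
# (1) deriving a confidence score from token-level log-probabilities, and
# (2) measuring agreement between model-highlighted and human-annotated
#     influential words. Names and data formats here are assumptions.

import math


def confidence_from_logprobs(token_logprobs):
    """Convert per-token log-probabilities of the predicted label into a
    single probability-like confidence score (geometric-mean token probability).

    `token_logprobs` is assumed to be a list of log-probabilities for the
    tokens of the generated label, as exposed by an API's logprobs output.
    """
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)


def word_agreement(model_words, human_words):
    """Jaccard overlap between two sets of influential words (case-insensitive)."""
    m = {w.lower() for w in model_words}
    h = {w.lower() for w in human_words}
    if not m and not h:
        return 1.0
    return len(m & h) / len(m | h)


# Hypothetical example: model and annotators partly agree on a Persian sentence
# labeled "joy".
model_words = ["خوشحال", "امروز"]   # words the model cites in its explanation
human_words = ["خوشحال", "لبخند"]   # words human annotators marked as influential
label_logprobs = [-0.05, -0.20]      # illustrative log-probs for the label tokens

print(f"confidence ≈ {confidence_from_logprobs(label_logprobs):.3f}")
print(f"agreement  ≈ {word_agreement(model_words, human_words):.2f}")
```

A higher confidence paired with low word agreement is the kind of divergence the study flags: the model is sure of its label, yet the words it claims drove the decision do not match what human annotators consider influential.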
Similar Papers
Walk the Talk? Measuring the Faithfulness of Large Language Model Explanations
Computation and Language
Checks if AI's answers are honest.
Towards Transparent Reasoning: What Drives Faithfulness in Large Language Models?
Computation and Language
Makes AI give honest reasons for its answers.
Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models
Computation and Language
Checks if AI's answers truly match its thinking.