To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms
By: AKM Bahalul Haque, A. K. M. Najmul Islam, Patrick Mikalef
Potential Business Impact:
Explains why social media platforms suggest the content they do.
AI-based social media recommendations have great potential to improve the user experience. Often, however, these recommendations do not match users' interests and create an unpleasant experience. Moreover, because the recommendation system is a black box, it raises comprehensibility and transparency issues. This paper investigates social media recommendations from an end-user perspective. For the investigation, we used the popular social media platform Facebook and recruited regular users for a qualitative analysis. We asked participants about the platform's content suggestions, their comprehensibility, and their explainability. Our analysis shows that users mostly require explanations when they encounter unfamiliar content and when they want assurance about their online data security. Furthermore, users require concise, non-technical explanations along with the ability to control their information flow. In addition, we observed that explanations affect users' perceptions of transparency, trust, and understandability. Finally, we outline design implications and present a synthesized framework based on our data analysis.
Similar Papers
Context-Aware Visualization for Explainable AI Recommendations in Social Media: A Vision for User-Aligned Explanations
Artificial Intelligence
Makes social media suggestions easy to understand.
Can AI Explanations Make You Change Your Mind?
Human-Computer Interaction
Helps people trust AI by showing how it thinks.
Designing Effective AI Explanations for Misinformation Detection: A Comparative Study of Content, Social, and Combined Explanations
Human-Computer Interaction
Helps computers spot fake news better.