Online Minimization of Polarization and Disagreement via Low-Rank Matrix Bandits
By: Federico Cinus, Yuko Kuroki, Atsushi Miyauchi, and others
Potential Business Impact:
Helps platforms reduce online polarization and disagreement by learning users' opinions over time.
We study the problem of minimizing polarization and disagreement in the Friedkin-Johnsen opinion dynamics model under incomplete information. Unlike prior work that assumes a static setting with full knowledge of users' innate opinions, we address the more realistic online setting in which innate opinions are unknown and must be learned through sequential observations. This novel setting, which naturally mirrors periodic interventions on social media platforms, is formulated as a regret minimization problem, establishing a key connection between algorithmic interventions on these platforms and the theory of multi-armed bandits. In our formulation, a learner observes only scalar feedback of the overall polarization and disagreement after each intervention. For this novel bandit problem, we propose a two-stage algorithm based on low-rank matrix bandits: it first performs subspace estimation to identify an underlying low-dimensional structure, and then runs a linear bandit algorithm within the compact representation derived from the estimated subspace. We prove that our algorithm achieves $\widetilde{O}(\sqrt{T})$ cumulative regret over any time horizon $T$. Empirical results validate that our algorithm significantly outperforms a linear bandit baseline in terms of both cumulative regret and running time.
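To make the two-stage idea concrete, here is a minimal numerical sketch of the general "subspace estimation, then linear bandit in the reduced space" template that the abstract describes. It is not the paper's algorithm: the action set, the feedback model (scalar reward assumed linear in an unknown low-rank matrix, with reward maximization standing in for minimizing polarization plus disagreement), and all names and parameters (`feedback`, `random_action`, `T_explore`, `alpha`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical environment (illustrative, not the paper's model) ----------
# Scalar feedback is modeled as a noisy linear function of an unknown rank-r
# matrix Theta*: reward(X) = <X, Theta*> + noise.
n, r, T_explore, T = 20, 3, 400, 1000
U_true = rng.normal(size=(n, r))
V_true = rng.normal(size=(n, r))
Theta_star = U_true @ V_true.T / n            # unknown low-rank parameter

def feedback(X):
    """Noisy scalar feedback for an intervention matrix X."""
    return np.sum(X * Theta_star) + 0.01 * rng.normal()

def random_action():
    """Random intervention with unit Frobenius norm (illustrative action set)."""
    X = rng.normal(size=(n, n))
    return X / np.linalg.norm(X)

# --- Stage 1: subspace estimation ---------------------------------------------
# Fit Theta* by least squares on random exploratory interventions, then keep
# the top-r left/right singular subspaces of the estimate.
A = np.stack([random_action() for _ in range(T_explore)])
y = np.array([feedback(X) for X in A])
theta_hat, *_ = np.linalg.lstsq(A.reshape(T_explore, -1), y, rcond=None)
U_hat, _, Vt_hat = np.linalg.svd(theta_hat.reshape(n, n))
U_r, V_r = U_hat[:, :r], Vt_hat[:r, :].T      # estimated subspaces

# --- Stage 2: LinUCB in the compact representation ----------------------------
# Project each candidate action into the r*r-dimensional space spanned by the
# estimated subspaces and run a standard LinUCB there.
d, lam, alpha = r * r, 1.0, 1.0
Vmat, b = lam * np.eye(d), np.zeros(d)        # regularized design matrix, response sum
cum_regret = 0.0

for t in range(T):
    candidates = [random_action() for _ in range(20)]
    feats = np.stack([(U_r.T @ X @ V_r).ravel() for X in candidates])
    theta_t = np.linalg.solve(Vmat, b)
    Vinv = np.linalg.inv(Vmat)
    ucb = feats @ theta_t + alpha * np.sqrt(
        np.einsum("ij,jk,ik->i", feats, Vinv, feats)
    )
    i = int(np.argmax(ucb))
    phi_t, rwd = feats[i], feedback(candidates[i])
    Vmat += np.outer(phi_t, phi_t)
    b += rwd * phi_t
    # Per-round regret against the best candidate, using the simulator's ground truth.
    means = np.array([np.sum(X * Theta_star) for X in candidates])
    cum_regret += means.max() - means[i]

print(f"cumulative regret after {T} rounds: {cum_regret:.3f}")
```

The point of the sketch is the dimensionality gain: after stage 1, the linear bandit operates on $r^2$ features instead of $n^2$, which is what drives the improvement over a plain linear bandit baseline in both regret and running time.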
Similar Papers
Efficient Algorithms for Relevant Quantities of Friedkin-Johnsen Opinion Dynamics Model
Social and Information Networks
Figures out how people's opinions change online.
How Bad Is Forming Your Own Multidimensional Opinion?
CS and Game Theory
Helps predict how groups form opinions on many things.
On the optimal regret of collaborative personalized linear bandits
Machine Learning (CS)
Helps many AI agents learn faster together.