Analyzing Political Text at Scale with Online Tensor LDA
By: Sara Kangaslahti, Danny Ebanks, Jean Kossaifi, and more
Potential Business Impact:
Lets computers understand huge amounts of text quickly.
This paper proposes a topic modeling method that scales linearly to billions of documents. We make three core contributions: i) we present a topic modeling method, Tensor Latent Dirichlet Allocation (TLDA), with identifiability and parameter-recovery guarantees as well as sample complexity guarantees for large data; ii) we show that this method is computationally and memory efficient (achieving speeds 3-4x those of prior parallelized Latent Dirichlet Allocation (LDA) methods) and that it scales linearly to text datasets with over a billion documents; iii) we provide an open-source, GPU-based implementation of this method. This scaling enables previously prohibitive analyses, and we perform two new, real-world, large-scale studies of interest to political scientists: the first thorough analysis of the evolution of the #MeToo movement through the lens of over two years of Twitter conversation, and a detailed study of social media conversations about election fraud in the 2020 presidential election. This method thus gives social scientists the ability to study very large corpora at scale and to answer theoretically relevant questions about salient issues in near real-time.
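The core machinery behind tensor-based LDA is a method-of-moments pipeline: form low-order word co-occurrence moments, whiten them, and decompose the resulting orthogonal third-order tensor to recover topic-word distributions. The toy sketch below illustrates that pipeline on exact moments of a simplified single-topic mixture (not the full LDA Dirichlet moment corrections the paper uses); all names and sizes are illustrative assumptions, not code from the paper's implementation.

```python
import numpy as np

# Toy sketch of the tensor decomposition at the heart of TLDA-style
# topic recovery. Uses exact moments of a single-topic mixture for
# simplicity; the full method applies Dirichlet moment corrections.
rng = np.random.default_rng(0)
V, K = 20, 3                                   # vocab size, number of topics
topics = rng.dirichlet(np.full(V, 0.1), K)     # K x V ground-truth topics
weights = np.array([0.5, 0.3, 0.2])            # topic mixing weights

# Low-order moments: M2 = sum_k w_k mu_k mu_k^T, M3 its 3rd-order analog
M2 = np.einsum("k,ki,kj->ij", weights, topics, topics)
M3 = np.einsum("k,ki,kj,kl->ijl", weights, topics, topics, topics)

# Whitening: W satisfies W^T M2 W = I_K (top-K eigenpairs of M2)
vals, vecs = np.linalg.eigh(M2)
W = vecs[:, -K:] / np.sqrt(vals[-K:])          # V x K

# The whitened third moment is an orthogonally decomposable tensor
T = np.einsum("abc,ai,bj,cl->ijl", M3, W, W, W)

def tensor_power(T, n_iter=100):
    """Recover one eigenpair of a symmetric 3rd-order tensor."""
    v = rng.standard_normal(K)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum("ijk,j,k->i", T, v, v)   # power update T(I, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum("ijk,i,j,k->", T, v, v, v)
    return lam, v

topics_hat, weights_hat = [], []
for _ in range(K):
    lam, v = tensor_power(T)
    topics_hat.append(lam * np.linalg.pinv(W.T) @ v)  # un-whiten component
    weights_hat.append(1.0 / lam**2)                  # mixing weight
    T = T - lam * np.einsum("i,j,k->ijk", v, v, v)    # deflate and repeat
```

With exact moments, each recovered vector in `topics_hat` matches a ground-truth topic up to permutation; at scale, the same decomposition is applied to empirical moments streamed over the corpus, which is what makes the one-pass, linear scaling possible.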
Similar Papers
Automated Sentiment Classification and Topic Discovery in Large-Scale Social Media Streams
Computation and Language
Finds what people feel and talk about online.
Quantifying consistency and accuracy of Latent Dirichlet Allocation
Computation and Language
Finds real topics in messy text data.
Latent Topic Synthesis: Leveraging LLMs for Electoral Ad Analysis
Computation and Language
Sorts political ads by what they talk about.