Selecting Language Models for Social Science: Start Small, Start Open, and Validate

Published: January 16, 2026 | arXiv ID: 2601.10926v1

By: Dustin S. Stoltz, Marshall A. Taylor, Sanuj Kumar

Potential Business Impact:

Offers practical guidance for selecting language models for research and analytics projects.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Currently, there are thousands of pretrained large language models (LLMs) available to social scientists. How do we select among them? Using validity, reliability, reproducibility, and replicability as guides, we explore the significance of: (1) model openness, (2) model footprint, (3) training data, and (4) model architectures and fine-tuning. While ex-ante tests of validity (i.e., benchmarks) are often privileged in these discussions, we argue that social scientists cannot altogether avoid validating computational measures ex-post. Replicability, in particular, is a more pressing guide for selecting language models: reliably replicating a finding that entails the use of a language model necessitates reliably reproducing the underlying task. To this end, we propose starting with smaller, open models, and constructing delimited benchmarks to demonstrate the validity of the entire computational pipeline.
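The ex-post validation the authors recommend can be as simple as comparing a model's labels against a hand-coded sample and reporting chance-corrected agreement. A minimal sketch, using purely illustrative labels (not data from the paper) and a hand-rolled Cohen's kappa:

```python
# Hypothetical sketch: ex-post validation of a language-model classifier
# against human-coded labels, rather than relying on ex-ante benchmarks.
# The label sequences below are illustrative placeholders.
from collections import Counter

human = ["pos", "neg", "pos", "neu", "neg", "pos", "neu", "neg", "pos", "pos"]
model = ["pos", "neg", "neu", "neu", "neg", "pos", "pos", "neg", "pos", "neg"]

def accuracy(a, b):
    """Share of items where the two coders agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences."""
    n = len(a)
    po = accuracy(a, b)                             # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)   # agreement expected by chance
    return (po - pe) / (1 - pe)

print(f"accuracy = {accuracy(human, model):.2f}")   # -> accuracy = 0.70
print(f"kappa    = {cohens_kappa(human, model):.2f}")  # -> kappa    = 0.53
```

Rerunning this check whenever the model, prompt, or pipeline changes is one way to make a delimited benchmark part of a reproducible workflow.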

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Computer Science: Computation and Language