Selecting Language Models for Social Science: Start Small, Start Open, and Validate
By: Dustin S. Stoltz, Marshall A. Taylor, Sanuj Kumar
Potential Business Impact:
Helps researchers choose the right language model for social science projects.
Currently, there are thousands of large pretrained language models (LLMs) available to social scientists. How do we select among them? Using validity, reliability, reproducibility, and replicability as guides, we explore the significance of: (1) model openness, (2) model footprint, (3) training data, and (4) model architectures and fine-tuning. While ex-ante tests of validity (i.e., benchmarks) are often privileged in these discussions, we argue that social scientists cannot altogether avoid validating computational measures ex-post. Replicability, in particular, is a more pressing guide for selecting language models: reliably replicating a finding that depends on a language model requires being able to reliably reproduce the underlying task. To this end, we propose starting with smaller, open models and constructing delimited benchmarks to demonstrate the validity of the entire computational pipeline.
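To make the "start small, start open, and validate" recommendation concrete, the sketch below shows one way a researcher might check a small open model against a hand-coded, delimited benchmark before committing to it in a pipeline. It is a minimal illustration, not the authors' code: the model name, label set, and example texts are placeholder assumptions, and it relies on the Hugging Face `transformers` and `scikit-learn` packages.

```python
# Minimal sketch (illustrative only): validate a small, open model against a
# hand-coded "delimited benchmark" before scaling up the pipeline.
from transformers import pipeline
from sklearn.metrics import accuracy_score, cohen_kappa_score

MODEL_NAME = "facebook/bart-large-mnli"   # a smaller, openly available checkpoint (assumption)
LABELS = ["economic", "cultural"]         # hypothetical coding scheme

# Hand-coded benchmark: texts the research team labeled themselves (made up here).
benchmark_texts = [
    "Wages stagnated while rents climbed across the metro area.",
    "The festival celebrated regional music and foodways.",
]
benchmark_labels = ["economic", "cultural"]

# In practice, also pin a specific model revision (commit hash) for reproducibility.
classifier = pipeline("zero-shot-classification", model=MODEL_NAME)

predictions = []
for text in benchmark_texts:
    result = classifier(text, candidate_labels=LABELS)
    predictions.append(result["labels"][0])  # keep the top-scoring label

# Ex-post validity check of the full pipeline against the delimited benchmark.
print("accuracy:", accuracy_score(benchmark_labels, predictions))
print("kappa:   ", cohen_kappa_score(benchmark_labels, predictions))
```

If agreement with the hand-coded labels is acceptable, the same pinned model and prompt setup can be applied to the full corpus; if not, the benchmark makes it cheap to compare alternative small, open models before moving to larger ones.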
Similar Papers
A validity-guided workflow for robust large language model research in psychology
Human-Computer Interaction
Provides a workflow to keep psychology findings based on AI-simulated responses valid.
Guidelines for Empirical Studies in Software Engineering involving Large Language Models
Software Engineering
Makes software engineering studies involving LLMs easier to check and repeat.