Regulatory gray areas of LLM Terms
By: Brittany I. Davidson, Kate Muir, Florian A. D. Burnat, and more
Large Language Models (LLMs) are increasingly integrated into academic research pipelines; however, the Terms of Service governing their use remain under-examined. We present a comparative analysis of the Terms of Service of five major LLM providers (Anthropic, DeepSeek, Google, OpenAI, and xAI), collected in November 2025. Our analysis reveals substantial variation in the stringency and specificity of usage restrictions for general users and researchers. We highlight specific complexities for researchers in security research, computational social science, and psychological studies, and identify "regulatory gray areas" where Terms of Service create uncertainty for legitimate use. We contribute a publicly available resource comparing terms across platforms (OSF) and discuss implications for general users and researchers navigating this evolving landscape.