Scaling Laws for Economic Productivity: Experimental Evidence in LLM-Assisted Consulting, Data Analyst, and Management Tasks
By: Ali Merali
This paper derives "Scaling Laws for Economic Impacts" -- empirical relationships between the training compute of Large Language Models (LLMs) and professional productivity. In a preregistered experiment, over 500 consultants, data analysts, and managers completed professional tasks using one of 13 LLMs. We find that each year of AI model progress reduced task time by 8%, with 56% of gains driven by increased compute and 44% by algorithmic progress. However, productivity gains were significantly larger for non-agentic analytical tasks than for agentic workflows requiring tool use. These findings suggest continued model scaling could boost U.S. productivity by approximately 20% over the next decade.
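To make the headline figure concrete: an 8% reduction in task time per year of model progress compounds multiplicatively if the rate holds. The sketch below is an illustrative back-of-the-envelope calculation, not the paper's own methodology; the assumption that the 8% annual reduction compounds at a constant rate over a ten-year horizon is ours.

```python
# Illustrative compounding of an 8% annual task-time reduction.
# Assumption (not from the paper): the rate is constant and compounds yearly.
annual_reduction = 0.08
years = 10

# Fraction of the original task time remaining after `years` of progress.
remaining_time = (1 - annual_reduction) ** years
cumulative_reduction = 1 - remaining_time

print(f"Time remaining after {years} years: {remaining_time:.1%}")
print(f"Cumulative task-time reduction: {cumulative_reduction:.1%}")
```

Note that the compounded time savings exceed the paper's estimated ~20% aggregate productivity boost; the gap reflects that only a share of economy-wide work is amenable to LLM assistance, and that the experimental gains varied by task type.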