Scientific judgment drifts over time in AI ideation
By: Lingyu Zhang, Mitchell Wang, Boyuan Chen
Potential Business Impact:
Helps AI learn how scientists judge ideas.
Scientific discovery begins with ideas, yet evaluating early-stage research concepts is a subtle and subjective human judgment. As large language models (LLMs) are increasingly tasked with generating scientific hypotheses, most systems treat scientists' evaluations as a fixed gold standard, assuming their judgments do not change. Here we challenge this assumption. In a two-wave study with 7,182 ratings from 57 active researchers across six scientific departments, each participant repeatedly evaluated a constant "control" research idea alongside AI-generated ideas. We show that scientists' ratings of the very same idea drift systematically over time: overall quality scores increased by 0.61 points on a 0-10 scale (P = 0.005), and test-retest reliability was only moderate across core dimensions of scientific value. Yet the internal structure of judgment, such as the relative importance placed on originality, feasibility, and clarity, remained stable. We then aligned an LLM-based ideation system to first-wave human ratings and used it to select new ideas. Although alignment improved agreement with Wave-1 evaluations, its apparent gains disappeared once drift in human standards was accounted for: tuning to a fixed human snapshot produced improvements that were transient rather than persistent. These findings reveal that human evaluation of scientific ideas is not static but a dynamic process with stable priorities yet shifting calibration. Treating one-time human ratings as immutable ground truth risks overstating progress in AI-assisted ideation and obscuring the challenge of co-evolving with changing expert standards. Drift-aware evaluation protocols and longitudinal benchmarks may therefore be essential for building AI systems that reliably augment, rather than overfit to, human scientific judgment.
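To make the two-wave protocol concrete, the following is a minimal Python sketch (assuming NumPy and SciPy are available) of the kind of drift check the abstract describes: a paired test for a mean shift in repeated ratings of the same control idea, plus a test-retest correlation. The function drift_report and the simulated ratings below are illustrative assumptions, not the authors' code or data.

# Minimal drift check for repeated ratings of one control idea across two waves.
import numpy as np
from scipy import stats

def drift_report(wave1, wave2):
    """Paired drift test on the same raters' 0-10 scores, aligned by rater."""
    w1 = np.asarray(wave1, dtype=float)
    w2 = np.asarray(wave2, dtype=float)
    t_stat, p_value = stats.ttest_rel(w2, w1)   # mean shift between waves
    r, _ = stats.pearsonr(w1, w2)               # test-retest reliability
    return {
        "mean_shift": float(np.mean(w2 - w1)),
        "p_value": float(p_value),
        "test_retest_r": float(r),
    }

# Hypothetical data: 57 raters with a small upward drift in the second wave.
rng = np.random.default_rng(0)
w1 = rng.normal(6.0, 1.2, size=57).clip(0, 10)
w2 = (w1 + 0.6 + rng.normal(0, 0.8, size=57)).clip(0, 10)
print(drift_report(w1, w2))

A positive mean_shift with a small p_value would indicate upward drift in perceived quality, while a moderate test_retest_r (rather than near 1.0) would mirror the moderate reliability the study reports.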
Similar Papers
Bias in the Loop: How Humans Evaluate AI-Generated Suggestions
Human-Computer Interaction
Helps people work better with computers.
On the Influence of Artificial Intelligence on Human Problem-Solving: Empirical Insights for the Third Wave in a Multinational Longitudinal Pilot Study
Computers and Society
Helps people check AI answers better.
AI Judges in Design: Statistical Perspectives on Achieving Human Expert Equivalence With Vision-Language Models
Artificial Intelligence
AI judges design ideas as well as people.