Proof of Time: A Benchmark for Evaluating Scientific Idea Judgments
By: Bingyang Ye, Shan Chen, Jingxuan Tu, and more
Large language models are increasingly being used to assess and forecast research ideas, yet we lack scalable ways to evaluate the quality of models' judgments about these scientific ideas. Toward this goal, we introduce Proof of Time (PoT), a semi-verifiable benchmarking framework that links scientific idea judgments to downstream signals that become observable later (e.g., citations and shifts in researchers' agendas). PoT freezes a pre-cutoff snapshot of evidence in an offline sandbox and asks models to forecast post-cutoff outcomes, enabling verifiable evaluation once ground truth arrives, scalable benchmarking without exhaustive expert annotation, and analysis of human-model misalignment against signals such as peer-review awards. In addition, PoT provides a controlled testbed for agents that judge scientific ideas, comparing tool-using agents to non-agent baselines under prompt ablations and budget scaling. Across 30,000+ instances spanning four benchmark domains, we find that higher interaction budgets generally improve agent performance over non-agent baselines, while the benefit of tool use is strongly task-dependent. By combining time-partitioned, future-verifiable targets with an offline sandbox for tool use, PoT supports scalable evaluation of agents on future-facing scientific idea judgment tasks.
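The evaluation protocol described above (freeze pre-cutoff evidence, elicit a forecast, score it once post-cutoff outcomes arrive) can be sketched roughly as follows. This is an illustrative approximation, not the benchmark's released code: the instance fields, the citation-percentile target, the model.generate interface, and the mean-absolute-error scoring are all assumptions made for the example.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class PoTInstance:
        """One benchmark instance: frozen pre-cutoff evidence plus a post-cutoff target.
        Field names are illustrative, not the benchmark's actual schema."""
        idea_text: str                   # the scientific idea to be judged
        pre_cutoff_evidence: List[str]   # snapshot available to the model (offline sandbox)
        cutoff_date: str                 # evidence after this date is hidden at forecast time
        observed_outcome: float          # e.g., post-cutoff citation percentile, revealed later

    def judge_idea(model, instance: PoTInstance) -> float:
        """Ask the model to forecast the post-cutoff outcome using only the
        frozen pre-cutoff evidence (no live web access)."""
        prompt = (
            f"Idea: {instance.idea_text}\n"
            f"Evidence (all dated before {instance.cutoff_date}):\n"
            + "\n".join(instance.pre_cutoff_evidence)
            + "\nForecast the idea's post-cutoff citation percentile (0-100):"
        )
        return float(model.generate(prompt))  # model.generate is a stand-in interface

    def evaluate(model, instances: List[PoTInstance]) -> float:
        """Score forecasts against outcomes once ground truth has arrived;
        mean absolute error is used here purely for illustration."""
        errors = [abs(judge_idea(model, inst) - inst.observed_outcome) for inst in instances]
        return sum(errors) / len(errors)

Because the target is only observable after the cutoff, the same frozen instances can be re-scored as real-world outcomes accrue, which is what lets the benchmark scale without exhaustive expert annotation.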
Similar Papers
PoETa v2: Toward More Robust Evaluation of Large Language Models in Portuguese
Computation and Language
Tests how well language models understand Portuguese.
Progress over Points: Reframing LM Benchmarks Around Scientific Objectives
Machine Learning (CS)
Argues that benchmarks should measure scientific progress, not just scores.
ReEfBench: Quantifying the Reasoning Efficiency of LLMs
Artificial Intelligence
Tests whether AI truly reasons or just talks a lot.