Towards a Large Physics Benchmark
By: Kristian G. Barman, Sascha Caron, Faegheh Hasibi, and more
Potential Business Impact:
Tests AI to help scientists discover new physics.
We introduce a benchmark framework developed by and for the scientific community to evaluate, monitor, and steer large language model development in fundamental physics. Building on philosophical concepts of scientific understanding and creativity, we develop a scoring system in which each question is scored by an expert for its correctness, difficulty, and surprise. The questions take three forms: (i) multiple-choice questions for conceptual understanding, (ii) analytical problems requiring mathematical derivation, and (iii) open-ended tasks requiring complex problem solving. Our current dataset contains a diverse set of examples, including a machine learning challenge to classify high-energy physics events, such as the four-top-quark signal. To ensure continued relevance, we propose a living benchmark, where physicists contribute questions, for instance alongside new publications. We invite contributions via: http://www.physicsbenchmarks.org/. We hope that this benchmark will enable targeted AI development that can make a meaningful contribution to fundamental physics research.
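The abstract describes questions of three types, each scored by an expert for correctness, difficulty, and surprise. As a rough illustration only, here is a minimal sketch of how such a scored question might be represented; the class names, score ranges, and the aggregate formula are assumptions for illustration, not the paper's actual scheme.

```python
from dataclasses import dataclass
from enum import Enum


class QuestionType(Enum):
    # The three question forms named in the abstract.
    MULTIPLE_CHOICE = "multiple-choice"  # conceptual understanding
    ANALYTICAL = "analytical"            # mathematical derivation
    OPEN_ENDED = "open-ended"            # complex problem solving


@dataclass
class ScoredQuestion:
    prompt: str
    qtype: QuestionType
    correctness: float  # expert-assigned score in [0, 1] (assumed range)
    difficulty: float   # expert-assigned difficulty in [0, 1] (assumed range)
    surprise: float     # how unexpected the question/answer is, in [0, 1] (assumed)


def aggregate_score(q: ScoredQuestion, w_diff: float = 1.0, w_surp: float = 1.0) -> float:
    """One hypothetical aggregate: correctness weighted up by difficulty and surprise.

    This formula is purely illustrative; the paper defines its own scoring.
    """
    return q.correctness * (1.0 + w_diff * q.difficulty + w_surp * q.surprise)


q = ScoredQuestion(
    prompt="Classify events containing a four-top-quark signal.",
    qtype=QuestionType.OPEN_ENDED,
    correctness=1.0,
    difficulty=0.5,
    surprise=0.5,
)
print(aggregate_score(q))  # 1.0 * (1 + 0.5 + 0.5) = 2.0
```

With default weights, a fully correct answer to a question of moderate difficulty and surprise scores above plain correctness, reflecting the abstract's emphasis on rewarding harder and more creative questions.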
Similar Papers
Theoretical Physics Benchmark (TPBench) -- a Dataset and Study of AI Reasoning Capabilities in Theoretical Physics
Machine Learning (CS)
Tests if AI can solve hard science puzzles.
ABench-Physics: Benchmarking Physical Reasoning in LLMs via High-Difficulty and Dynamic Physics Problems
Machine Learning (CS)
Tests if computers can solve hard physics problems.
PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
Artificial Intelligence
Tests if computers can solve hard physics problems.