An MLCommons Scientific Benchmarks Ontology
By: Ben Hawks, Gregor von Laszewski, Matthew D. Sinclair, and more
Potential Business Impact:
Standardizes science benchmarks for better machine learning.
Scientific machine learning research spans diverse domains and data modalities, yet existing benchmark efforts remain siloed and lack standardization. This fragmentation makes novel and transformative applications of machine learning to critical scientific use cases harder to pursue and obscures their pathways to impact. This paper introduces an ontology for scientific benchmarking developed through a unified, community-driven effort that extends the MLCommons ecosystem to cover physics, chemistry, materials science, biology, climate science, and more. Building on prior initiatives such as XAI-BENCH, FastML Science Benchmarks, PDEBench, and the SciMLBench framework, our effort consolidates a large set of disparate benchmarks and frameworks into a single taxonomy of scientific, application, and system-level benchmarks. New benchmarks can be added through an open submission workflow coordinated by the MLCommons Science Working Group and evaluated against a six-category rating rubric that promotes and identifies high-quality benchmarks, enabling stakeholders to select benchmarks that meet their specific needs. The architecture is extensible, supporting future scientific and AI/ML motifs, and we discuss methods for identifying emerging computing patterns in unique scientific workloads. The MLCommons Science Benchmarks Ontology provides a standardized, scalable foundation for reproducible, cross-domain benchmarking in scientific machine learning. A companion webpage, which will evolve alongside the effort, is available at: https://mlcommons-science.github.io/benchmark/
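To make the taxonomy and rubric concrete, below is a minimal Python sketch of what a single ontology record in the submission workflow might look like. This is an illustration only, not the working group's actual schema: the abstract names the three taxonomy levels (scientific, application, system) and a six-category rubric but does not publish the category names, so the `category_*` keys and the `BenchmarkEntry` class are hypothetical placeholders.

```python
from dataclasses import dataclass, field
from enum import Enum


# The three taxonomy levels named in the abstract.
class BenchmarkLevel(Enum):
    SCIENTIFIC = "scientific"
    APPLICATION = "application"
    SYSTEM = "system"


@dataclass
class BenchmarkEntry:
    """Hypothetical record for one benchmark in the ontology."""
    name: str
    domain: str            # e.g., "physics", "climate science"
    level: BenchmarkLevel
    # Six-category rating rubric; the real category names are not
    # listed in the abstract, so these keys are placeholders.
    rubric: dict[str, int] = field(default_factory=dict)

    def rubric_complete(self) -> bool:
        # A submission is rated against all six rubric categories.
        return len(self.rubric) == 6


# Example: registering a hypothetical PDE-surrogate benchmark.
entry = BenchmarkEntry(
    name="pde-surrogate-2d",
    domain="physics",
    level=BenchmarkLevel.SCIENTIFIC,
    rubric={f"category_{i}": 0 for i in range(1, 7)},
)
assert entry.rubric_complete()
```

A structured record like this is one plausible way stakeholders could filter benchmarks by domain, taxonomy level, and rubric scores to find those that meet their specific needs.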
Similar Papers
Computational Law: Datasets, Benchmarks, and Ontologies
Computation and Language
Helps computers understand and use laws better.
Common Task Framework For a Critical Evaluation of Scientific Machine Learning Algorithms
Computational Engineering, Finance, and Science
Makes scientific machine learning more trustworthy and fair.