Fast Symbolic Regression Benchmarking
By: Viktor Martinek
Potential Business Impact:
Finds math rules in science data faster.
Symbolic regression (SR) uncovers mathematical models from data. Several benchmarks have been proposed to compare the performance of SR algorithms. However, existing ground-truth rediscovery benchmarks overemphasize the recovery of "the one" expression form or rely solely on computer algebra systems (such as SymPy) to assess success. Furthermore, existing benchmarks continue the expression search even after the target expression has been discovered. We address these issues by introducing curated lists of acceptable expressions and a callback mechanism for early termination. As a starting point, we use the symbolic regression for scientific discovery (SRSD) benchmark problems proposed by Matsubara et al. and benchmark the two SR packages SymbolicRegression.jl and TiSR. The new benchmarking method increases the rediscovery rate of SymbolicRegression.jl from 26.7%, as reported by Matsubara et al., to 44.7%, while requiring 41.2% less computational effort. TiSR's rediscovery rate is 69.4%, and the benchmark requires 63% less time.
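To illustrate the two ideas in the abstract, a curated list of acceptable expressions and a callback for early termination, here is a minimal Python/SymPy sketch. The paper's actual implementations target the Julia packages SymbolicRegression.jl and TiSR; the function names, the accepted forms, and the string-based callback interface below are illustrative assumptions, not the authors' API.

```python
# Minimal sketch (illustrative only): a rediscovery check against a curated
# list of acceptable expression forms, plus a callback that signals the
# search loop to stop as soon as one of them is found.
import sympy as sp

x = sp.Symbol("x")

# Hypothetical curated list of equivalent forms accepted as a rediscovery.
ACCEPTABLE_FORMS = [x**2 - 1, (x - 1) * (x + 1)]

def is_rediscovered(candidate: sp.Expr) -> bool:
    """Return True if the candidate simplifies to any accepted form."""
    return any(sp.simplify(candidate - target) == 0 for target in ACCEPTABLE_FORMS)

def early_stop_callback(best_expression: str) -> bool:
    """Poll this from the search loop; True means terminate the run early."""
    return is_rediscovered(sp.sympify(best_expression))

# Example: the SR engine reports its current best expression as a string.
print(early_stop_callback("(x + 1)*(x - 1)"))  # True -> stop searching
```

Checking against a list of equivalent forms, rather than a single canonical expression, avoids penalizing algebraically identical rediscoveries, and terminating via the callback removes the wasted search time after discovery that the reported 41.2% and 63% savings reflect.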
Similar Papers
Call for Action: towards the next generation of symbolic regression benchmark
Machine Learning (CS)
Tests computer math formulas to find best ones.
Current Challenges of Symbolic Regression: Optimization, Selection, Model Simplification, and Benchmarking
Neural and Evolutionary Computing
Finds simpler math rules for better predictions.
Introduction to Symbolic Regression in the Physical Sciences
Machine Learning (CS)
Finds hidden math rules in science data.