Analyzing Generalization in Pre-Trained Symbolic Regression
By: Henrik Voigt, Paul Kahlmeyer, Kai Lawonn, and more
Potential Business Impact:
Tests whether AI that finds math formulas works on new problems.
Symbolic regression algorithms search a space of mathematical expressions for formulas that explain given data. Transformer-based models have emerged as a promising, scalable approach, shifting the expensive combinatorial search to a large-scale pre-training phase. However, the success of these models depends critically on their pre-training data, and their ability to generalize to problems outside of this pre-training distribution remains largely unexplored. In this work, we conduct a systematic empirical study to evaluate the generalization capabilities of pre-trained, transformer-based symbolic regression. We rigorously test several state-of-the-art approaches, both within the pre-training distribution and on a series of out-of-distribution challenges. Our findings reveal a significant dichotomy: while pre-trained models perform well in-distribution, their performance consistently degrades in out-of-distribution scenarios. We conclude that this generalization gap is a critical barrier for practitioners, as it severely limits the practical use of pre-trained approaches for real-world applications.
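To make the problem setting concrete, symbolic regression can be sketched as a search over candidate expressions scored by how well they fit the data. The toy example below is a hypothetical, brute-force illustration over a hand-picked expression space, not the paper's method; the candidate set and ground-truth formula are invented for demonstration.

```python
import numpy as np

# Toy symbolic regression: enumerate a tiny, hand-picked space of
# candidate expressions and pick the one with the lowest mean-squared
# error on the observed data. Real systems (including the pre-trained
# transformer approaches studied in the paper) search vastly larger
# expression spaces.
candidates = {
    "x": lambda x: x,
    "x**2": lambda x: x**2,
    "x**2 + x": lambda x: x**2 + x,
    "sin(x)": lambda x: np.sin(x),
}

def fit_symbolic(x, y):
    """Return the name of the candidate expression minimizing MSE."""
    scores = {name: np.mean((f(x) - y) ** 2) for name, f in candidates.items()}
    return min(scores, key=scores.get)

x = np.linspace(-2.0, 2.0, 50)
y = x**2 + x                     # hidden ground-truth formula
print(fit_symbolic(x, y))        # recovers "x**2 + x"
```

The generalization question the paper studies then amounts to: does a model pre-trained on one distribution of formulas and inputs still recover good expressions when the test data comes from a different distribution?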
Similar Papers
Propositional Logic for Probing Generalization in Neural Networks
Machine Learning (CS)
Computers struggle to learn logic rules.
Provable test-time adaptivity and distributional robustness of in-context learning
Machine Learning (Stat)
AI learns better from mixed-difficulty training.
When can in-context learning generalize out of task distribution?
Machine Learning (CS)
Teaches computers to learn new things from few examples.