From Equations to Insights: Unraveling Symbolic Structures in PDEs with LLMs
By: Rohan Bhatnagar, Ling Liang, Krish Patel, and more
Potential Business Impact:
AI finds hidden math rules in science problems.
Motivated by the remarkable success of artificial intelligence (AI) across diverse fields, the application of AI to scientific problems, often formulated as partial differential equations (PDEs), has garnered increasing attention. While most existing research concentrates on theoretical properties of the solutions (such as well-posedness, regularity, and continuity), alongside direct AI-driven methods for solving PDEs, the challenge of uncovering symbolic relationships within these equations remains largely unexplored. In this paper, we propose leveraging large language models (LLMs) to learn such symbolic relationships. Our results demonstrate, both theoretically and numerically, that LLMs can effectively predict the operators involved in PDE solutions by exploiting the symbolic information in the PDEs. Furthermore, we show that discovering these symbolic relationships can substantially improve both the efficiency and accuracy of symbolic machine learning for finding analytical approximations of PDE solutions, delivering a fully interpretable solution pipeline. This work opens new avenues for understanding the symbolic structure of scientific problems and advancing their solution processes.
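To make the described pipeline concrete, here is a minimal, hedged sketch of the general idea: prompt an LLM with a PDE's symbolic form to predict which elementary operators are likely to appear in its solution, then use that prediction to prune the candidate operator set for a downstream symbolic fit. The specific PDE, the `query_llm` stub, the prompt wording, and the separable ansatz are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: an LLM-predicted operator set prunes the candidate
# basis for a toy symbolic fit of a PDE solution. `query_llm` is a placeholder
# for a real LLM call; the heat equation and the ansatz are assumed examples.
import numpy as np

PDE = "u_t = u_xx"  # 1D heat equation, written symbolically

def query_llm(prompt: str) -> list[str]:
    """Stand-in for an LLM call (e.g., a chat-completions request).
    Here we hard-code a plausible reply: exponential decay times a sine mode."""
    return ["exp", "sin"]

prompt = (
    f"Given the PDE '{PDE}' with zero Dirichlet boundary conditions on [0, pi], "
    "list the elementary operators likely to appear in its analytical solution."
)
predicted_ops = set(query_llm(prompt))

# Full candidate dictionary of unary operators for the symbolic search.
candidate_ops = {
    "exp": np.exp,
    "sin": np.sin,
    "cos": np.cos,
    "log": np.log1p,
}
# Keep only the operators the LLM flagged; this is the "symbolic relationship"
# that shrinks the search space for the downstream symbolic-regression step.
pruned_ops = {name: fn for name, fn in candidate_ops.items() if name in predicted_ops}

# Toy use of the pruned dictionary: fit a coefficient for a separable ansatz
# u(x, t) ~ c * sin(x) * exp(-t) against synthetic data from the true solution mode.
x, t = np.meshgrid(np.linspace(0, np.pi, 50), np.linspace(0.0, 1.0, 50))
u_true = np.sin(x) * np.exp(-t)
basis = pruned_ops["sin"](x) * pruned_ops["exp"](-t)
c = float(np.sum(basis * u_true) / np.sum(basis * basis))  # least-squares coefficient
print(f"pruned operators: {sorted(pruned_ops)}, fitted coefficient: {c:.3f}")
```

In this toy setting the pruning step is what the abstract refers to as discovering symbolic relationships: the symbolic-regression stage no longer searches over irrelevant operators such as `cos` or `log`, which is the mechanism by which efficiency and accuracy could improve.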
Similar Papers
Text-Trained LLMs Can Zero-Shot Extrapolate PDE Dynamics
Machine Learning (CS)
Computers predict future events from math patterns.
CodePDE: An Inference Framework for LLM-driven PDE Solver Generation
Machine Learning (CS)
Computers write code to solve hard math problems.
Generalizing PDE Emulation with Equation-Aware Neural Operators
Machine Learning (CS)
AI learns to solve many math problems faster.