Efficient Neural Clause-Selection Reinforcement
By: Martin Suda
Potential Business Impact:
Teaches computers to prove math problems faster.
Clause selection is arguably the most important choice point in saturation-based theorem proving. Framing it as a reinforcement learning (RL) task is a way to challenge the human-designed heuristics of state-of-the-art provers and instead to evolve, automatically and purely from prover experience, a potentially optimal replacement. In this work, we present a neural network architecture for scoring clauses for clause selection that is powerful yet efficient to evaluate. Following RL principles in our design decisions, we integrate the network into the Vampire theorem prover and train it from successful proof attempts. In an experiment on the diverse TPTP benchmark, the neurally guided prover improves by 20% over the baseline strategy from which it initially learns, measured as the number of problems unseen during training that it solves under a practically relevant, short CPU instruction limit.
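To make the clause-selection choice point concrete, here is a minimal, purely illustrative sketch of a scored selection step: unprocessed clauses are ranked by a learned scoring function and the highest-scoring clause is picked next. The feature extraction and linear weights below are hypothetical stand-ins for illustration only; they do not reproduce the paper's neural architecture or its integration into Vampire.

```python
import heapq

def clause_features(clause):
    # Toy features (assumed for illustration): number of literals and
    # total symbol length of the clause.
    return [len(clause), sum(len(lit) for lit in clause)]

# Hypothetical "learned" weights: this toy scorer simply prefers
# shorter, lighter clauses, mimicking a classic age/weight heuristic.
WEIGHTS = [-0.5, -0.1]

def score(clause):
    return sum(w * f for w, f in zip(WEIGHTS, clause_features(clause)))

def select_clauses(unprocessed, steps):
    """Repeatedly pick the highest-scoring clause (max-heap via negated scores)."""
    heap = [(-score(c), i, c) for i, c in enumerate(unprocessed)]
    heapq.heapify(heap)
    picked = []
    for _ in range(min(steps, len(heap))):
        _, _, clause = heapq.heappop(heap)
        picked.append(clause)
    return picked

clauses = [("p(x)",), ("~p(x)", "q(x)"), ("~q(x)", "r(x)", "s(x)")]
print(select_clauses(clauses, 2))
```

In a real saturation loop, each selected clause would be used to generate inferences whose conclusions flow back into the unprocessed set; the point of the paper is to replace a hand-crafted scorer like the one above with a trained neural network that remains cheap enough to evaluate on every candidate clause.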
Similar Papers
Neural Approaches to SAT Solving: Design Choices and Interpretability
Machine Learning (CS)
Helps computers solve hard puzzles faster.
Proof-Driven Clause Learning in Neural Network Verification
Logic in Computer Science
Checks if AI makes safe decisions.
LangSAT: A Novel Framework Combining NLP and Reinforcement Learning for SAT Solving
Computation and Language
Lets computers solve logic puzzles from English.