Score: 2

Efficient Neural Clause-Selection Reinforcement

Published: March 10, 2025 | arXiv ID: 2503.07792v2

By: Martin Suda

Potential Business Impact:

Teaches computers to prove mathematical theorems faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Clause selection is arguably the most important choice point in saturation-based theorem proving. Framing it as a reinforcement learning (RL) task is a way to challenge the human-designed heuristics of state-of-the-art provers and to instead evolve a potentially optimal replacement automatically, purely from prover experiences. In this work, we present a neural network architecture for scoring clauses for clause selection that is powerful yet efficient to evaluate. Following RL principles to make design decisions, we integrate the network into the Vampire theorem prover and train it from successful proof attempts. An experiment on the diverse TPTP benchmark finds that the neurally guided prover improves over the baseline strategy from which it initially learns by 20%, measured by the number of in-training-unseen problems solved under a practically relevant, short CPU instruction limit. A minimal illustrative sketch of the general idea (learned clause scoring updated from successful proofs) follows below.
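The sketch below is not the paper's actual architecture or training procedure; it only illustrates the general recipe the abstract describes: a neural scorer ranks candidate clauses, and a policy-gradient-style update reinforces the selections that ended up in a found proof. The feature set, network sizes, reward scheme, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch of neural clause scoring with a REINFORCE-style update
# from a successful proof attempt. Not the architecture from the paper.
import torch
import torch.nn as nn

class ClauseScorer(nn.Module):
    """Scores candidate clauses from simple numeric features; higher = selected sooner."""
    def __init__(self, num_features: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, clause_features: torch.Tensor) -> torch.Tensor:
        # clause_features: (num_clauses, num_features) -> (num_clauses,) scores
        return self.net(clause_features).squeeze(-1)

def reinforce_step(scorer, optimizer, clause_features, proof_mask):
    """One policy-gradient update from a successful proof attempt.

    clause_features: (N, F) features of clauses selected during the run.
    proof_mask: (N,) 1.0 for clauses that ended up in the found proof, else 0.0.
    """
    scores = scorer(clause_features)              # (N,) raw clause scores
    log_probs = torch.log_softmax(scores, dim=0)  # selection distribution over clauses
    # Reward the selections that contributed to the proof (baseline-free REINFORCE).
    loss = -(proof_mask * log_probs).sum() / proof_mask.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    scorer = ClauseScorer()
    opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
    # Toy data: 10 selected clauses with 4 hand-picked features each
    # (e.g. age, weight, literal count, derivation depth); 3 of them appear in the proof.
    feats = torch.rand(10, 4)
    mask = torch.zeros(10)
    mask[[1, 4, 7]] = 1.0
    print(reinforce_step(scorer, opt, feats, mask))
```

In the paper's setting the scorer would be evaluated inside the prover's saturation loop under a tight CPU budget, which is why an efficient-to-evaluate network matters; the toy update above only conveys the learn-from-successful-proofs signal.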

Country of Origin
🇨🇿 Czech Republic

Repos / Data Links

Page Count
26 pages

Category
Computer Science:
Artificial Intelligence