The 6th International Verification of Neural Networks Competition (VNN-COMP 2025): Summary and Results
By: Konstantin Kaulen, Tobias Ladner, Stanley Bak, and more
This report summarizes the 6th International Verification of Neural Networks Competition (VNN-COMP 2025), held as part of the 8th International Symposium on AI Verification (SAIV), which was co-located with the 37th International Conference on Computer-Aided Verification (CAV). VNN-COMP is held annually to facilitate the fair and objective comparison of state-of-the-art neural network verification tools, to encourage the standardization of tool interfaces, and to bring together the neural network verification community. To this end, standardized formats for networks (ONNX) and specifications (VNN-LIB) were defined, tools were evaluated on equal-cost hardware (using an automatic evaluation pipeline based on AWS instances), and tool parameters were chosen by the participants before the final test sets were made public. In the 2025 iteration, 8 teams participated on a diverse set of 16 regular and 9 extended benchmarks. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this iteration of the competition.
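To illustrate the standardized VNN-LIB specification format mentioned above, a minimal property file is sketched below. VNN-LIB builds on SMT-LIB syntax, with inputs conventionally named `X_i` and outputs `Y_i`; the particular variable names and bounds here are illustrative only and are not taken from any competition benchmark.

```
; Hypothetical VNN-LIB property (illustrative, not from a real benchmark):
; for all inputs X_0 in [-1, 1], the network output Y_0 stays below 0.5.
(declare-const X_0 Real)
(declare-const Y_0 Real)

; input constraints (a box over the input space)
(assert (>= X_0 -1.0))
(assert (<= X_0 1.0))

; output property to be verified (expressed as the negation to falsify,
; or checked directly, depending on the tool's convention)
(assert (>= Y_0 0.5))
```

A verifier then reports whether any input satisfying the constraints produces an output satisfying the (negated) property, typically answering `sat` (a counterexample exists) or `unsat` (the property holds).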