Scalable Formal Verification via Autoencoder Latent Space Abstraction
By: Robert Reed, Morteza Lahijanian, Luca Laurenti
Finite abstraction methods provide a powerful formal framework for proving that systems satisfy their specifications. However, these techniques face scalability challenges for high-dimensional systems, as they rely on state-space discretization, which grows exponentially with dimension. Learning-based approaches to dimensionality reduction, using neural networks such as autoencoders, have shown great potential to alleviate this problem; however, guaranteeing the correctness of verification results obtained in the reduced space remains an open question. In this work, we provide a formal approach that reduces the dimensionality of systems via convex autoencoders and learns the dynamics in the latent space through a kernel-based method. We then construct a finite abstraction from the learned model in the latent space and guarantee that the abstraction contains the true behaviors of the original system. We show that verification results obtained in the latent space can be soundly mapped back to the original system. Finally, we demonstrate the effectiveness of our approach on multiple systems, including a 26D system controlled by a neural network, showing significant scalability improvements without loss of rigor.
Similar Papers
- Ensemble Visualization With Variational Autoencoder (Machine Learning, CS): Shows weather patterns more clearly.
- Geodesic Calculus on Latent Spaces (Machine Learning, CS): Helps computers understand data shapes better.
- Autoencoder-based Semi-Supervised Dimensionality Reduction and Clustering for Scientific Ensembles (Machine Learning, CS): Helps scientists see patterns in complex data.