CPED-NCBFs: A Conformal Prediction for Expert Demonstration-based Neural Control Barrier Functions
By: Sumeadh MS, Kevin Dsouza, Ravi Prakash
Potential Business Impact:
Checks if AI safely follows rules.
Among the promising approaches for enforcing safety in control systems, learning Control Barrier Functions (CBFs) from expert demonstrations has emerged as an effective strategy. A critical challenge remains, however: verifying that a learned CBF truly enforces safety across the entire state space. This is especially difficult when the CBF is represented by a neural network (an NCBF). Several existing verification techniques attempt to address this problem, including SMT-based solvers, mixed-integer programming (MIP), and interval or bound-propagation methods, but these approaches often introduce loose, conservative bounds. To overcome these limitations, this work introduces CPED-NCBFs, a split-conformal-prediction-based strategy for verifying NCBFs learned from expert demonstrations. We validate the method on point-mass systems and unicycle models to demonstrate the effectiveness of the proposed theory.
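To illustrate the core mechanism, the sketch below shows how a split-conformal quantile can bound the violation probability of a barrier function. It is a minimal, hypothetical example, not the paper's implementation: the "learned" NCBF is a stand-in analytic barrier `h_hat` for a point mass whose true safe set is the unit disk, and the calibration states, sampling region, and score are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical stand-in for a learned NCBF: a slightly miscalibrated disk
# barrier h_hat(x) = 1.05 - ||x||^2, while the true safe set is ||x||^2 <= 1.
def h_hat(x):
    return 1.05 - np.sum(x**2, axis=-1)

def split_conformal_quantile(scores, alpha):
    # Split-conformal threshold q: under exchangeability, a fresh
    # calibration-like score exceeds q with probability at most alpha.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

rng = np.random.default_rng(0)
# Calibration set: states sampled uniformly from the unsafe region
# (outside the unit disk, inside the box [-2, 2]^2), via rejection.
pts = rng.uniform(-2.0, 2.0, size=(20000, 2))
unsafe = pts[np.sum(pts**2, axis=-1) > 1.0]

# Nonconformity score: the barrier value on unsafe states. A violation
# occurs when h_hat(x) >= 0 on an unsafe state; q bounds that margin.
q = split_conformal_quantile(h_hat(unsafe), alpha=0.05)

# If q < 0, the barrier mislabels a fresh unsafe state with probability
# at most alpha; otherwise, tightening the certificate to h_hat(x) - q
# restores the guarantee at the cost of a smaller certified safe set.
print(f"conformal threshold q = {q:.3f}")
```

The design choice here is the standard split-conformal recipe: verification reduces to sorting calibration scores and reading off one order statistic, which scales to neural barriers where SMT or MIP verification would be expensive.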
Similar Papers
CP-NCBF: A Conformal Prediction-based Approach to Synthesize Verified Neural Control Barrier Functions
Systems and Control
Makes robots safer by checking their actions.
Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation
Machine Learning (CS)
Makes AI safer by checking its decisions.
Learning Neural Control Barrier Functions from Expert Demonstrations using Inverse Constraint Learning
Artificial Intelligence
Teaches robots to avoid danger using examples.