Attack logics, not outputs: Towards efficient robustification of deep neural networks by falsifying concept-based properties
By: Raik Dankworth, Gesina Schwalbe
Potential Business Impact:
Makes AI vision systems more logical and harder to fool.
Deep neural networks (NNs) for computer vision are vulnerable to adversarial attacks, i.e., minuscule malicious changes to inputs may induce unintuitive outputs. One key approach to verify and mitigate such robustness issues is to falsify expected output behavior. This allows one, e.g., to locally prove security, or to (re)train NNs on the obtained adversarial input examples. Due to the black-box nature of NNs, current attacks only falsify the class of the final output, such as flipping from $\texttt{stop\_sign}$ to $\neg\texttt{stop\_sign}$. In this short position paper we generalize this to a search for generally illogical behavior, as considered in NN verification: falsifying constraints (concept-based properties) that involve further human-interpretable concepts, like $\texttt{red}\wedge\texttt{octagonal}\rightarrow\texttt{stop\_sign}$. For this, an easy implementation of concept-based properties on already trained NNs is proposed, using techniques from explainable artificial intelligence. Further, we sketch the theoretical proof that attacks on concept-based properties are expected to have a reduced search space compared to simple class falsification, while arguably being more aligned with intuitive robustness targets. As an outlook for this work in progress, we hypothesize that this approach has the potential to efficiently and simultaneously improve logical compliance and robustness.
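The abstract gives only the high-level recipe (attach concept detectors obtained via explainable-AI techniques to a trained NN, then attack the resulting property rather than the output class). The following is a minimal sketch of one way such a setup could look, assuming PyTorch, linear concept probes already fitted on an intermediate layer, and a PGD-style attack; all names (`ConceptProperty`, `probe_red`, `pgd_falsify`, etc.) are hypothetical and not taken from the paper.

```python
# Hedged sketch: falsifying the concept-based property
#   red AND octagonal -> stop_sign
# on an already trained classifier. Assumes linear concept probes
# (e.g., logistic probes on intermediate features) are available.
import torch
import torch.nn as nn


class ConceptProperty(nn.Module):
    """Scores violation of (red AND octagonal) -> stop_sign.

    A large positive score means: both concept probes fire,
    but the stop_sign class probability is low, i.e. the
    property is violated (illogical behavior).
    """

    def __init__(self, backbone, probe_red, probe_octagonal, classifier, stop_sign_idx):
        super().__init__()
        self.backbone = backbone              # feature extractor of the trained NN
        self.probe_red = probe_red            # linear probe on backbone features
        self.probe_octagonal = probe_octagonal
        self.classifier = classifier          # original classification head
        self.stop_sign_idx = stop_sign_idx

    def forward(self, x):
        feats = self.backbone(x)
        # Concept scores in [0, 1] from the (assumed logistic) probes.
        red = torch.sigmoid(self.probe_red(feats)).squeeze(-1)
        octagonal = torch.sigmoid(self.probe_octagonal(feats)).squeeze(-1)
        stop_sign = torch.softmax(self.classifier(feats), dim=-1)[:, self.stop_sign_idx]
        # Fuzzy truth value of the premise minus the conclusion:
        # large when "red" and "octagonal" hold but "stop_sign" does not.
        premise = torch.minimum(red, octagonal)   # fuzzy AND
        return premise - stop_sign                # violation score


def pgd_falsify(prop, x0, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD-style search for an input that violates the concept property."""
    x = x0.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        violation = prop(x).sum()
        grad, = torch.autograd.grad(violation, x)
        with torch.no_grad():
            x = x + alpha * grad.sign()           # ascend the violation score
            x = x0 + (x - x0).clamp(-eps, eps)    # stay in the eps-ball around x0
            x = x.clamp(0.0, 1.0)                 # keep a valid image range
    return x.detach()
```

In contrast to a plain class-flip attack, the optimization target here is the (soft) truth value of the property itself, so a found counterexample is an input on which the network behaves illogically (premise satisfied, conclusion not), rather than merely an input whose top class flips.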
Similar Papers
A General Framework for Property-Driven Machine Learning
Machine Learning (CS)
Teaches computers to follow rules, not just guess.
Verifying rich robustness properties for neural networks
Logic in Computer Science
Makes AI decisions more trustworthy and reliable.
Proof Minimization in Neural Network Verification
Logic in Computer Science
Makes AI safety checks smaller and faster.