Local Observability of a Class of Feedforward Neural Networks
By: Yi Yang, Victor G. Lopez, Matthias A. Müller
Potential Business Impact:
Shows when a computer can figure out a neural network's inner settings just by watching what goes in and what comes out.
Beyond the traditional neural network training methods based on gradient descent and its variants, state estimation techniques have been proposed to determine a set of ideal weights from a control-theoretic perspective. Hence, the concept of observability becomes relevant in neural network training. In this paper, we investigate local observability of a class of two-layer feedforward neural networks (FNNs) with rectified linear unit (ReLU) activation functions. We analyze local observability by evaluating an observability rank condition with respect to the weight matrix and the input sequence. First, we show that, in general, the weights of FNNs are not locally observable. Then, we provide sufficient conditions on the network structure and the weights that lead to local observability. Moreover, we propose an input design approach that renders the weights distinguishable, and we show that this input also excites other weights within a neighborhood. Finally, we validate our results through a numerical example.
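To make the rank condition concrete, below is a minimal numerical sketch, not the paper's analytical method: it stacks the Jacobians of a two-layer ReLU network's outputs with respect to the flattened weights over an input sequence and checks the rank, a standard empirical proxy for local weight distinguishability. All dimensions, function names, and the finite-difference approximation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

N_IN, N_HID, N_OUT = 2, 3, 1  # illustrative sizes (hypothetical, not from the paper)

def net_output(theta, u):
    """Output y = W2 @ relu(W1 @ u) of a two-layer feedforward ReLU network."""
    W1 = theta[:N_HID * N_IN].reshape(N_HID, N_IN)
    W2 = theta[N_HID * N_IN:].reshape(N_OUT, N_HID)
    return W2 @ np.maximum(W1 @ u, 0.0)

def stacked_jacobian(theta, inputs, eps=1e-6):
    """Central-difference Jacobian of the stacked outputs w.r.t. the weights.
    Valid away from ReLU kinks (which generic weights avoid)."""
    rows = []
    for u in inputs:
        J = np.zeros((N_OUT, theta.size))
        for k in range(theta.size):
            d = np.zeros_like(theta)
            d[k] = eps
            J[:, k] = (net_output(theta + d, u) - net_output(theta - d, u)) / (2 * eps)
        rows.append(J)
    return np.vstack(rows)

rng = np.random.default_rng(0)
theta = rng.standard_normal(N_HID * N_IN + N_OUT * N_HID)  # 9 weights in total
inputs = [rng.standard_normal(N_IN) for _ in range(12)]    # candidate input sequence
J = stacked_jacobian(theta, inputs)
print(f"Jacobian rank: {np.linalg.matrix_rank(J)} of {theta.size} weights")
```

On generic data this rank typically comes out deficient: for ReLU, scaling a hidden unit's incoming weights by a > 0 and its outgoing weights by 1/a leaves the output unchanged, so each hidden unit contributes a kernel direction. This is consistent with the abstract's first finding that the weights are, in general, not locally observable.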
Similar Papers
From Black-Box to White-Box: Control-Theoretic Neural Network Interpretability
Machine Learning (CS)
Shows how computer brains work inside.
Identifying Network Structure of Linear Dynamical Systems: Observability and Edge Misclassification
Systems and Control
Finds hidden connections in networks from few clues.
Observability conditions for neural state-space models with eigenvalues and their roots of unity
Machine Learning (CS)
Tells when an AI model's hidden state can be figured out from its outputs.