On the Sharp Input-Output Analysis of Nonlinear Systems under Adversarial Attacks
By: Jihun Kim, Yuchen Fang, Javad Lavaei
Potential Business Impact:
Enables systems to learn accurate models of complex dynamics from data that is corrupted or adversarially manipulated.
This paper is concerned with learning the input-output mapping of general nonlinear dynamical systems. While the existing literature focuses on Gaussian inputs and benign disturbances, we significantly broaden the scope of admissible control inputs and allow correlated, nonzero-mean, adversarial disturbances. By reformulating the unknown mapping as a linear combination of basis functions, we prove that the $l_1$-norm estimator overcomes these challenges as long as the probability that the system is under adversarial attack at a given time is smaller than a certain threshold. We provide an estimation error bound that decays with the input memory length and prove its optimality by constructing a problem instance that incurs an error of the same order under adversarial attacks. Our work provides a sharp input-output analysis for a generic nonlinear and partially observed system under significantly generalized assumptions compared to existing works.
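For intuition, the sketch below is a minimal illustration (not the authors' implementation) of an $l_1$-norm estimator over a fixed basis-function expansion, cast as a linear program with slack variables bounding the absolute residuals. The basis functions, data-generating model, and attack probability used here are hypothetical choices made only to demonstrate the robustness of the $l_1$ fit when a fraction of time steps carries large, biased disturbances.

```python
# Minimal sketch, assuming a hypothetical basis and data model (not from the paper):
# l1-norm (least-absolute-deviations) regression solved as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def basis(u):
    """Hypothetical basis functions of the length-L input window u."""
    return np.concatenate([u, np.sin(u), u**2])  # assumed basis, for illustration

# Simulate data: y_t = <theta*, phi(u_t)> + disturbance, where a fraction p of
# time steps carries a large, nonzero-mean (adversarial) disturbance.
L, n, p = 3, 200, 0.2
theta_star = rng.normal(size=3 * L)
U = rng.uniform(-1, 1, size=(n, L))               # general (non-Gaussian) inputs
Phi = np.array([basis(u) for u in U])
attacks = rng.random(n) < p                        # attack indicator per time step
noise = np.where(attacks, 5.0 + rng.normal(size=n), 0.01 * rng.normal(size=n))
y = Phi @ theta_star + noise

# l1-norm estimator: min_theta sum_t |y_t - <theta, phi(u_t)>|,
# written as an LP with slack variables t_i >= |residual_i|.
d = Phi.shape[1]
c = np.concatenate([np.zeros(d), np.ones(n)])      # minimize the sum of slacks
A_ub = np.block([[Phi, -np.eye(n)], [-Phi, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * d + [(0, None)] * n)
theta_hat = res.x[:d]

print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

With the attack probability p below one half, the $l_1$ fit in this toy setup remains close to theta_star, whereas an ordinary least-squares fit would be pulled toward the biased corrupted samples; this mirrors the threshold-type condition on the attack probability described in the abstract.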
Similar Papers
System Identification from Partial Observations under Adversarial Attacks
Optimization and Control
Protects computer systems from sneaky attacks.
A New Approach to Controlling Linear Dynamical Systems
Systems and Control
Makes robots learn faster, even when things go wrong.
Bridging Batch and Streaming Estimations to System Identification under Adversarial Attacks
Optimization and Control
Protects machines from sneaky attacks.