On the Sharp Input-Output Analysis of Nonlinear Systems under Adversarial Attacks

Published: May 16, 2025 | arXiv ID: 2505.11688v1

By: Jihun Kim, Yuchen Fang, Javad Lavaei

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Enables computers to reliably learn how a system behaves even when the data is corrupted or adversarially manipulated.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper is concerned with learning the input-output mapping of general nonlinear dynamical systems. While the existing literature focuses on Gaussian inputs and benign disturbances, we significantly broaden the scope of admissible control inputs and allow correlated, nonzero-mean, adversarial disturbances. With our reformulation as a linear combination of basis functions, we prove that the $l_1$-norm estimator overcomes these challenges as long as the probability that the system is under adversarial attack at a given time is smaller than a certain threshold. We provide an estimation error bound that decays with the input memory length and prove its optimality by constructing a problem instance that suffers from the same bound under adversarial attacks. Our work provides a sharp input-output analysis for a generic nonlinear and partially observed system under significantly generalized assumptions compared to existing works.
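The core idea in the abstract (a nonlinear input-output map rewritten as a linear combination of basis functions, fit with an $l_1$-norm estimator that tolerates sparse adversarial corruptions) can be illustrated with a toy numerical sketch. Everything below is a hypothetical setup, not the paper's actual model or estimator: the basis functions, the attack model (a large constant bias applied with probability `p_attack`), and the IRLS-based least-absolute-deviations solver are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: output is a linear combination of basis
# functions of the two most recent inputs,
#   y_t = sum_j theta_j * phi_j(u_{t-1}, u_{t-2}) + d_t,
# where d_t is an adversarial disturbance active with probability p_attack.
n, p_attack = 400, 0.1
u = rng.uniform(-1.0, 1.0, size=n + 2)

def features(u, t):
    # Assumed polynomial basis of the last two inputs (illustrative choice).
    u1, u2 = u[t - 1], u[t - 2]
    return np.array([u1, u2, u1 * u2, u1 ** 2])

theta_true = np.array([1.0, -0.5, 0.8, 0.3])
Phi = np.array([features(u, t) for t in range(2, n + 2)])
y = Phi @ theta_true
# Sparse adversarial corruption: a large bias on a random 10% of samples.
attack = rng.random(n) < p_attack
y = y + attack * 10.0

def lad_irls(Phi, y, iters=30, eps=1e-4):
    """Least-absolute-deviations (l1-norm) fit via iteratively
    reweighted least squares: minimize sum_t |y_t - phi_t^T theta|."""
    theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - Phi @ theta), eps)  # l1 weights
        Pw = Phi * w[:, None]
        # Weighted normal equations: (Phi^T W Phi) theta = Phi^T W y
        theta = np.linalg.solve(Phi.T @ Pw, Pw.T @ y)
    return theta

theta_l2 = np.linalg.lstsq(Phi, y, rcond=None)[0]  # ordinary least squares
theta_l1 = lad_irls(Phi, y)                        # l1-norm estimator

err_l2 = np.linalg.norm(theta_l2 - theta_true)
err_l1 = np.linalg.norm(theta_l1 - theta_true)
```

In this toy instance the least-squares fit is pulled off the true parameters by the corrupted samples, while the $l_1$-norm fit largely ignores them, mirroring the qualitative claim that the estimator succeeds when the attack probability stays below a threshold.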

Country of Origin
🇺🇸 United States

Page Count
28 pages

Category
Mathematics:
Optimization and Control