Control theory has produced, since the 1950s, a wealth of feedback designs with rigorous guarantees of stability, performance, robustness, and optimality. Some of these feedback laws are very complex and require intensive numerical computation online.
Neural operators, a branch of machine learning with sophisticated tools and theory for approximating infinite-dimensional nonlinear mappings, offer a way to speed up the online implementation by a factor on the order of 1,000x, replacing online numerical computations with evaluations of NN approximations of the operators.
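To make this concrete, below is a minimal, hedged sketch (in PyTorch) of the basic idea: a DeepONet-style neural operator maps a discretized plant-parameter function to the corresponding gain function, so the online controller performs a single forward pass instead of solving gain equations numerically. The architecture and the names (BranchTrunkOperator, n_grid) are illustrative assumptions for this sketch, not the specific designs used in the papers below.

```python
# Illustrative sketch only: a DeepONet-style operator mapping a discretized
# parameter function beta(x) to a gain function k(x). Class and variable
# names are assumptions for this sketch, not taken from the referenced papers.
import torch
import torch.nn as nn

class BranchTrunkOperator(nn.Module):
    """Branch net encodes beta(.); trunk net encodes the query point x."""
    def __init__(self, n_grid: int, width: int = 64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_grid, width), nn.ReLU(), nn.Linear(width, width)
        )
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.ReLU(), nn.Linear(width, width)
        )

    def forward(self, beta_samples, x_query):
        # beta_samples: (batch, n_grid) values of beta on a fixed grid
        # x_query:      (n_query, 1) spatial points where k is evaluated
        b = self.branch(beta_samples)   # (batch, width)
        t = self.trunk(x_query)         # (n_query, width)
        return b @ t.T                  # (batch, n_query), approximating k

# Online use: one forward pass in place of an online numerical gain solve.
model = BranchTrunkOperator(n_grid=101)
beta = torch.rand(1, 101)                        # discretized beta(x)
x = torch.linspace(0.0, 1.0, 101).unsqueeze(1)   # evaluation grid on [0, 1]
with torch.no_grad():
    k_hat = model(beta, x)                       # approximate gain k(x)
```

Gain kernels of two spatial variables, k(x, y), fit the same pattern by querying the trunk network on a two-dimensional grid of points.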
Around 2022, a line of research emerged in our group, in collaboration with Prof. Miroslav Krstic's group, on facilitating the implementation of complex feedback laws using neural operators. This research, while incorporating the usual machine learning steps (offline generation of a training set by numerical computation, followed by training of a neural operator), is also intensely theoretical. It establishes that the stability, performance, and robustness guarantees present in the classical control-theoretic designs are retained even under the NN approximations.
Arguably, some of the most complex feedback laws are those for PDEs, delay systems, and nonlinear systems. Our focus is on developing neural operators for such systems. The implementations are not limited to control laws but also include state estimators (observers), adaptive control, and (nonlinear) gain scheduling.
This research involves four main ingredients:
Defining the nonlinear operators that need to be approximated.
Establishing the continuity (and even Lipschitzness) of these nonlinear infinite-dimensional mappings.
Proving guarantees of Lyapunov stability, performance, and robustness under the NN approximations.
Computational illustrations of training the neural operators and of their performance in the feedback loop (a schematic training workflow is sketched after this list).
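As referenced in the last item, the computational workflow follows a standard offline-training pattern: generate (parameter, gain) pairs with a numerical kernel solver, then fit the operator by supervised regression. The sketch below is a hedged illustration under assumed names; solve_kernel_numerically is a placeholder for a problem-specific solver (not a library routine), and BranchTrunkOperator is the illustrative class from the earlier sketch.

```python
# Hedged workflow sketch: offline data generation plus supervised training.
# `solve_kernel_numerically` stands in for a problem-specific solver of the
# backstepping kernel equations; the stand-in below is NOT such a solver,
# it only produces arrays of the right shape for this example.
import torch
import torch.nn.functional as F

def solve_kernel_numerically(beta: torch.Tensor) -> torch.Tensor:
    # Placeholder: in practice, solve the kernel equations for this beta(x)
    # (e.g., by successive approximations or a finite-difference scheme).
    return torch.cumsum(beta, dim=-1) / beta.shape[-1]

# Offline: build the training set of (parameter, gain) pairs.
n_samples, n_grid = 1024, 101
betas = torch.rand(n_samples, n_grid)
kernels = torch.stack([solve_kernel_numerically(b) for b in betas])

# Train the operator (BranchTrunkOperator from the earlier sketch) by
# minimizing a mean-squared error over the sampled gain functions.
x = torch.linspace(0.0, 1.0, n_grid).unsqueeze(1)
model = BranchTrunkOperator(n_grid=n_grid)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    optimizer.zero_grad()
    loss = F.mse_loss(model(betas, x), kernels)
    loss.backward()
    optimizer.step()
```

The training loss is only a surrogate: the theoretical results (third item above) specify how accurate the operator approximation must be for the closed-loop stability, performance, and robustness guarantees to carry over.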
Journal Papers
[J1] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Neural operators for bypassing gain and control computations in PDE backstepping," IEEE Transactions on Automatic Control, vol. 69, pp. 5310-5325, 2024.
[J2] Miroslav Krstic, Luke Bhan, and Yuanyuan Shi, "Neural operators of backstepping controller and observer gain functions for reaction-diffusion PDEs," Automatica, paper 111649, 2024.
[J3] Maxence Lamarque, Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE," Automatica, vol. 177, paper 112329, 2025.
[J4] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Adaptive control of reaction-diffusion PDEs via neural operator-approximated gain kernels," Systems & Control Letters, vol. 195, paper 105968, 2025.
[J5] Luke Bhan, Miroslav Krstic, and Yuanyuan Shi, "Neural Operator Predictors for Delay-Compensated Nonlinear Stabilization," IEEE Transactions on Automatic Control, under review.
Refereed Conference Papers
[C1] Yuanyuan Shi, Zongyi Li, Huan Yu, Drew Steeves, Anima Anandkumar, and Miroslav Krstic, "Machine Learning Accelerated PDE Backstepping Observers," 61st IEEE Conference on Decision and Control (CDC), 2022.
[C2] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Operator Learning for Nonlinear Adaptive Control," Annual Learning for Dynamics and Control Conference (L4DC), 2023.
[C3] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Neural Operators for Hyperbolic PDE Backstepping Kernels," 62nd IEEE Conference on Decision and Control (CDC), 2023.
[C4] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Neural Operators for Hyperbolic PDE Backstepping Feedback Laws", 62nd IEEE Conference on Decision and Control (CDC), 2023.
[C5] Luke Bhan, Yuanyuan Shi, Iasson Karafyllis, Miroslav Krstic, and James B Rawlings, "Moving-Horizon Estimators for Hyperbolic and Parabolic PDEs in 1-D", American Control Conference (ACC), 2024.
[C6] Luke Bhan, Yuexin Bian, Miroslav Krstic, and Yuanyuan Shi, "PDE Control Gym: A Benchmark for Data-Driven Boundary Control of Partial Differential Equations", Annual Learning for Dynamics and Control Conference (L4DC), 2024.
[C7] Sharath Matada, Luke Bhan, Yuanyuan Shi, and Nikolay Atanasov, "Generalizable Motion Planning via Operator Learning," International Conference on Learning Representations (ICLR), 2025.
[C8] Luke Bhan, Peijia Qin, Miroslav Krstic, and Yuanyuan Shi, "Neural Operators for Predictor Feedback Control of Nonlinear Delay Systems," Annual Learning for Dynamics and Control Conference (L4DC), 2025. (Best Paper Finalist)
Work by other authors
[A1] M. Krstic, "Machine learning: Bane or boon for control?," IEEE Control Systems, vol. 44, pp. 24-37, 2024.
[A2] J. Qi, J. Zhang, and M. Krstic, "Neural operators for PDE backstepping control of first-order hyperbolic PIDE with recycle and delay," Systems & Control Letters, paper 105714, 2024.
[A3] M. Lamarque, L. Bhan, R. Vazquez, and M. Krstic, "Gain scheduling with a neural operator for a transport PDE with nonlinear recirculation," IEEE Transactions on Automatic Control, to appear.
[A4] S.-S. Wang, M. Diagne, and M. Krstic, "Backstepping neural operators for 2x2 hyperbolic PDEs," Automatica, vol. 178, paper 112351, 2025.
[A5] S.-S. Wang, M. Diagne, and M. Krstic, "Deep learning of delay-compensated backstepping for reaction-diffusion PDEs," IEEE Transactions on Automatic Control, to appear.
[A6] K. Lv, J. Wang, Y. Zhang, and H. Yu, "Neural Operators for Adaptive Control of Freeway Traffic," arXiv preprint arXiv:2410.20708, 2024.
[A7] Y. Zhang, J. Auriol, and H. Yu, "Operator Learning for Robust Stabilization of Linear Markov-Jumping Hyperbolic PDEs," arXiv preprint arXiv:2412.09019, 2024.
[A8] Y. Jiang and J. Wang, "Neural operators of backstepping controller gain kernels for an ODE cascaded with a reaction-diffusion equation," 43rd Chinese Control Conference (CCC), 2024.
[A9] K. Lv, J. Wang, and Y. Cao, "Neural Operator Approximations for Boundary Stabilization of Cascaded Parabolic PDEs," International Journal of Adaptive Control and Signal Processing, 2024.
[A10] Y. Xiao, Y. Yuan, B. Luo, and X. Xu, "Neural operators for robust output regulation of hyperbolic PDEs," Neural Networks, vol. 179, 2024.
[A11] X. Zhang, Y. Xiao, X. Xu, and B. Luo, "Intelligent Acceleration Adaptive Control of Linear 2x2 Hyperbolic PDE Systems," arXiv preprint arXiv:2411.04461, 2024.
[A12] J. Hu, J. Qi, and J. Zhang, "Neural Operator based Reinforcement Learning for Control of First-order PDEs with Spatially-Varying State Delay," arXiv preprint arXiv:2501.18201, 2025.
[A13] Y. Zhang, J. Auriol, and H. Yu, "Neural-Operator Control for Traffic Flow Models with Stochastic Demand," 5th IFAC Workshop on Control of Systems Governed by Partial Differential Equations (CPDE 2025), 2025.
[A14] Y. Zhang, R. Zhong, and H. Yu, "Mitigating stop-and-go traffic congestion with operator learning," Transportation Research Part C: Emerging Technologies, vol. 170, 2025.
Example: Neural-operator approximation of PDE backstepping gains for 1-D hyperbolic PDEs [J1]
Hyperbolic PDEs: the top row shows the mapping from the system parameter beta(x) to the controller gain k(x), learned via a neural operator; the bottom row shows open-loop instability and closed-loop stability with the learned kernel.
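In the closed-loop simulation of the bottom row, the learned gain enters a boundary feedback law given by a spatial integral of the gain against the measured state. The sketch below is schematic only: the exact integral form of the controller is given in [J1], and here it is illustrated with a generic kernel integral evaluated by the trapezoidal rule; boundary_control, k_hat, and u_profile are names assumed for this sketch.

```python
# Schematic only: evaluate a boundary control of the generic form
#   U(t) ≈ ∫_0^1 k_hat(x) u(x, t) dx
# with the trapezoidal rule, where k_hat comes from the neural operator
# and u(., t) is the measured PDE state. See [J1] for the exact feedback law.
import numpy as np

def boundary_control(k_hat: np.ndarray, u_profile: np.ndarray) -> float:
    """Approximate the kernel integral of the gain against the state."""
    x = np.linspace(0.0, 1.0, k_hat.size)
    return float(np.trapz(k_hat * u_profile, x))

# Placeholder inputs sampled on a common uniform grid on [0, 1].
k_hat = 0.5 * np.ones(101)                              # gain from the operator
u_profile = np.sin(np.pi * np.linspace(0.0, 1.0, 101))  # current PDE state
U = boundary_control(k_hat, u_profile)                  # boundary input at time t
```

In a typical simulation loop this integral is re-evaluated at every time step, while the neural-operator forward pass that produces k_hat is needed only when the plant parameter (or its estimate, in adaptive or gain-scheduled settings) changes.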