Description
Neural operators such as DeepONets have recently been introduced to approximate nonlinear operators, with a focus on the solution operators of PDEs. However, their implementation relies on deep neural networks whose training takes place in a high-dimensional space of parameters and hyperparameters. This, coupled with the need for substantial computational resources, makes it challenging to achieve high numerical accuracy with limited resources. Very recently, inspired by DeepONets, we introduced RandONets [1]: shallow networks that embed the input space via random projections, so that their training can exploit specialized numerical analysis techniques from linear algebra to learn linear and nonlinear operators efficiently and accurately. We prove that RandONets are universal approximators of nonlinear operators. Furthermore, we assess their performance with a focus on the approximation of evolution operators (right-hand sides, RHS) of PDEs. We demonstrate that RandONets outperform DeepONets by several orders of magnitude in both numerical approximation accuracy and computational cost. In summary, our work shows that carefully designed “light” neural networks, aided by tailor-made numerical analysis methods, can provide significantly faster and more accurate approximations of nonlinear operators than deep neural networks.
[1] Fabiani, G., Kevrekidis, I. G., Siettos, C., & Yannacopoulos, A. N. (2025). RandONets: Shallow networks with random projections for learning linear and nonlinear operators. Journal of Computational Physics, 520, 113433.
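To make the idea concrete, the sketch below illustrates the general recipe described in the abstract: fixed (untrained) random-feature embeddings of the sampled input function (branch) and of the query location (trunk), with only a linear readout fitted by least squares. This is not the formulation of [1]; the toy antiderivative operator, the cosine random features, the plain pseudo-inverse solve, and all sizes and scalings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed for illustration): learn the antiderivative operator
# G(u)(y) = int_0^y u(x) dx from input functions sampled at m sensor points.
m, n = 50, 50
x_sens = np.linspace(0, 1, m)      # sensor locations for u
y_query = np.linspace(0, 1, n)     # query locations for G(u)

def sample_data(n_funcs):
    # Random smooth inputs u(x) = sum_k a_k sin(k*pi*x) and their antiderivatives.
    U, V = [], []
    for _ in range(n_funcs):
        a = rng.normal(size=5) / np.arange(1, 6)
        u = sum(ak * np.sin((k + 1) * np.pi * x_sens) for k, ak in enumerate(a))
        v = sum(ak * (1 - np.cos((k + 1) * np.pi * y_query)) / ((k + 1) * np.pi)
                for k, ak in enumerate(a))
        U.append(u); V.append(v)
    return np.array(U), np.array(V)

U_train, V_train = sample_data(300)

# Fixed random projections for branch (input function) and trunk (query point);
# the feature counts and scalings below are arbitrary choices for this sketch.
p_b, p_t = 200, 100
Wb = rng.normal(size=(m, p_b)) / np.sqrt(m); bb = rng.uniform(0, 2 * np.pi, p_b)
Wt = rng.normal(size=(1, p_t)) * 10.0;       bt = rng.uniform(0, 2 * np.pi, p_t)

def branch(U):   # random-feature embedding of the sampled input functions
    return np.cos(U @ Wb + bb)

def trunk(y):    # random-feature embedding of the query locations
    return np.cos(y.reshape(-1, 1) @ Wt + bt)

# Training reduces to linear least squares for the readout matrix W in
#   V  ~=  B W^T T^T,   B = branch(U_train), T = trunk(y_query),
# solved here with pseudo-inverses (regularized solvers could be used instead).
B = branch(U_train)                                     # (N, p_b)
T = trunk(y_query)                                      # (n, p_t)
W = np.linalg.pinv(T) @ V_train.T @ np.linalg.pinv(B).T # (p_t, p_b)

# Evaluate on unseen input functions.
U_test, V_test = sample_data(10)
V_pred = (T @ W @ branch(U_test).T).T
print("relative L2 error:", np.linalg.norm(V_pred - V_test) / np.linalg.norm(V_test))
```

Because the random embeddings are never trained, the only optimization is the linear solve above, which is what makes this class of shallow operator networks fast to fit compared with gradient-based training of deep architectures.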