RandONets: Shallow networks with random projections for learning linear and nonlinear operators

Published: 23 Jun 2025 (Last Modified: 23 Jun 2025) · Greeks in AI 2025 Poster · CC BY 4.0
Keywords: Interpretable machine learning, Random projections, Shallow neural networks, Linear and nonlinear operators, Numerical analysis
Abstract: Deep neural networks have been widely applied to solving both forward and inverse problems in dynamical systems. However, their implementation requires optimizing over a high-dimensional space of parameters and hyperparameters, which, combined with substantial computational demands, poses challenges to achieving both high numerical accuracy and interpretability. To address these limitations, we introduce Random Projection-based Operator Networks (RandONets (https://doi.org/10.1016/j.jcp.2024.113433)): shallow networks incorporating random projections and specialized numerical analysis techniques to efficiently and accurately learn both linear and nonlinear operators. We prove that RandONets serve as universal approximators for nonlinear operators. Their simplicity enables a direct one-step transformation of the input space, enhancing interpretability. To evaluate their performance, we focus on PDE operators, demonstrating that RandONets surpass "vanilla" DeepONets by several orders of magnitude in both numerical accuracy and computational efficiency.
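To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the random-projection recipe the abstract describes: the hidden layer is drawn at random and frozen, and only a linear readout is fitted by least squares. Here it is applied to a simple linear operator, the antiderivative `F[u](y) = ∫₀ʸ u(x) dx`, with sampled input functions; the training-data generator, grid size, feature count, and activation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid on which input/output functions are sampled (illustrative choice).
m = 50
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]
n_train = 200

def random_fn(rng, k=5):
    """A random smooth input function: a sum of k sines with Gaussian coefficients."""
    a = rng.normal(size=k)
    return lambda t: sum(a[j] * np.sin((j + 1) * np.pi * t) for j in range(k))

def antiderivative(u):
    """Trapezoidal-rule antiderivative of samples u on the grid x (the target operator)."""
    return np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) / 2.0) * dx))

# Training pairs (u, F[u]) sampled on the grid.
U = np.empty((n_train, m))
V = np.empty((n_train, m))
for i in range(n_train):
    u = random_fn(rng)(x)
    U[i] = u
    V[i] = antiderivative(u)

# Shallow network with a *fixed* random hidden layer (the random projection).
h = 300
W = rng.normal(size=(m, h)) / np.sqrt(m)   # scaled so tanh is not saturated
b = rng.normal(size=h)
Phi = np.tanh(U @ W + b)                   # random features of the input functions

# Only the linear readout is trained, via one least-squares solve -- no
# gradient-based optimization of the hidden weights.
beta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
train_res = np.linalg.norm(Phi @ beta - V) / np.linalg.norm(V)

# Held-out test function.
u_test = random_fn(rng)(x)
v_true = antiderivative(u_test)
v_pred = np.tanh(u_test @ W + b) @ beta
test_err = np.linalg.norm(v_pred - v_true) / np.linalg.norm(v_true)
```

The design point the abstract emphasizes shows up here: because the hidden weights are never trained, the whole "training" step is a single linear least-squares solve, which is both fast and interpretable as a one-step transformation of the input space.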
Submission Number: 6