Keywords: Scientific Machine Learning, Physics-informed methods, Mesh-free solvers, Partial Differential Equations, Interpretable machine learning, Efficient training algorithms, Data-driven modeling, Computational physics, Gaussian Mixture Models, Stiff PDEs, Singularly Perturbed Problems, Boundary Layers
TL;DR: We introduce the Gaussian Mixture Model Adaptive PIELM (GMM-PIELM), a probabilistic framework that learns a probability density function representing the "location of physics" for adaptively sampling kernels of PIELMs.
Abstract: Modeling stiff partial differential equations (PDEs) with sharp gradients remains a significant challenge for scientific machine learning. While Physics-Informed Neural Networks (PINNs) struggle with spectral bias and slow training times, Physics-Informed Extreme Learning Machines (PIELMs) offer a rapid, closed-form linear solution but are fundamentally limited by physics-agnostic, random initialization. We introduce the Gaussian Mixture Model Adaptive PIELM (GMM-PIELM), a probabilistic framework that learns a probability density function representing the "location of physics" for adaptively sampling the kernels of PIELMs. By employing a weighted Expectation-Maximization (EM) algorithm, GMM-PIELM autonomously concentrates radial basis function centers in regions of high numerical error, such as shock fronts and boundary layers. This approach dynamically improves the conditioning of the hidden layer without the expensive gradient-based optimization of PINNs or a Bayesian search. We evaluate our methodology on 1D singularly perturbed convection-diffusion equations with diffusion coefficient $\nu = 10^{-4}$. Our method achieves $L_2$ errors up to $7$ orders of magnitude lower than baseline RBF-PIELMs, successfully resolving exponentially thin boundary layers while retaining the orders-of-magnitude speed advantage of the ELM architecture.
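The core loop the abstract describes — fit a mixture to collocation points weighted by local error, then draw new RBF centers from it — can be sketched as below. This is a minimal 1D illustration under stated assumptions: the function names, the choice of two components, the EM details, and the stand-in residual are all hypothetical, not the paper's implementation.

```python
import numpy as np

def weighted_em_gmm(x, w, k=2, iters=50, seed=0):
    """Fit a 1D Gaussian mixture to points x with importance weights w
    (e.g. normalized PDE residual magnitudes) via weighted EM.
    Illustrative sketch only; not the authors' code."""
    rng = np.random.default_rng(seed)
    w = w / w.sum()
    mu = rng.choice(x, size=k, replace=False)      # random initial means
    sig = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities, folded together with the error weights
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d / sig) ** 2 - np.log(sig) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)    # stabilize the softmax
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        r *= w[:, None]                            # importance weighting
        # M-step: weighted mixture updates
        nk = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / nk
        d2 = (x[:, None] - mu[None, :]) ** 2
        sig = np.sqrt((r * d2).sum(axis=0) / nk) + 1e-8
        pi = nk / nk.sum()
    return pi, mu, sig

def sample_centers(pi, mu, sig, n, seed=1):
    """Draw n RBF centers from the fitted mixture density."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(pi), size=n, p=pi)
    return rng.normal(mu[comp], sig[comp])

# Toy usage: a stand-in residual spikes near x = 1, mimicking an
# exponentially thin boundary layer, so sampled centers cluster there.
x = np.linspace(0.0, 1.0, 400)
res = np.exp((x - 1.0) / 0.01)                     # |residual| surrogate
pi, mu, sig = weighted_em_gmm(x, res)
centers = sample_centers(pi, mu, sig, 50)
```

The point of the sketch is the mechanism, not the numbers: because the E-step responsibilities are multiplied by the error weights, components migrate toward high-residual regions, and resampling centers from the resulting density concentrates the PIELM hidden layer exactly where the physics is hardest to resolve.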
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: me22b102@smail.iitm.ac.in
Journal Notes: Future extensions of this work will focus on benchmarking the algorithm’s scalability and robustness by transitioning from 1D toy problems to complex 2D and 3D steady and unsteady cases. This expansion includes a comparative study against other importance sampling-based methods to quantify improvements in convergence and accuracy. Additionally, we plan to conduct a detailed ablation study on various probability density transforms and the impact of specific algorithmic parameters. Beyond empirical validation, we aim to provide a rigorous analysis of the underlying mechanics to offer a theoretical justification for why the method outperforms standard sampling in high-gradient regions.
Submission Number: 67