LEARNING EMBEDDINGS OF NON-LINEAR PDES: THE BURGERS’ EQUATION

Published: 01 Mar 2026, Last Modified: 05 Mar 2026
AI&PDE Poster, CC BY 4.0
Keywords: Physics-Informed Neural Networks, Multi-head architectures, Embeddings, Principal Component Analysis
TL;DR: A multi-head PINN learns a shared latent representation of Burgers’ equation whose orthogonalized PCA decomposition offers a stable diagnostic of effective dimensionality across families of solutions.
Abstract: Embeddings provide low-dimensional geometric representations that organize complex function spaces and support efficient retrieval, comparison, and generalization. In this work we extend the concept to Physics-Informed Neural Networks. We present a method to construct solution embedding spaces of nonlinear partial differential equations using a multi-head setup, and to extract non-degenerate information from them using principal component analysis (PCA). We test this method on the viscous Burgers’ equation, solved simultaneously for a family of initial conditions and viscosity values. A shared network body learns a latent embedding of the solution space, while linear heads map this embedding to individual realizations. By enforcing orthogonality constraints on the heads, we obtain a principal-component decomposition of the latent space that is robust to training degeneracies and admits a direct physical interpretation. The components obtained for Burgers’ equation saturate rapidly, indicating that a small number of latent modes captures the dominant features of the dynamics.
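The architecture described in the abstract (shared body, per-realization linear heads, orthogonality constraint on the heads, PCA of the latent space) can be sketched as follows. This is a minimal illustrative mock-up in NumPy, not the authors' implementation: the body sizes, the input layout `(t, x, nu)`, the Frobenius-norm form of the orthogonality penalty, and the use of SVD for the PCA diagnostic are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_body(inputs, params):
    """Two-layer tanh MLP mapping collocation points (t, x, nu) to a latent embedding z.
    (Hypothetical body; the paper does not specify the architecture here.)"""
    W1, b1, W2, b2 = params
    h = np.tanh(inputs @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def init_body(d_in=3, d_hidden=32, d_latent=8):
    """Random untrained parameters, for illustration only."""
    return (rng.normal(0.0, 0.5, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0.0, 0.5, (d_hidden, d_latent)), np.zeros(d_latent))

def orthogonality_penalty(H):
    """One plausible constraint: Frobenius penalty ||H H^T - I||^2 on the
    stacked head weight matrix H of shape (K, d_latent)."""
    G = H @ H.T
    return float(np.sum((G - np.eye(H.shape[0])) ** 2))

K, d_latent = 4, 8                 # K heads, one per (initial condition, viscosity) pair
params = init_body(d_latent=d_latent)

# Heads with orthonormal rows (via QR), so the penalty vanishes exactly.
H = np.linalg.qr(rng.normal(size=(d_latent, K)))[0].T   # shape (K, d_latent)

pts = rng.uniform(size=(100, 3))   # mock collocation points (t, x, nu)
z = shared_body(pts, params)       # latent embeddings, shape (100, d_latent)
u = z @ H.T                        # per-head solution values, shape (100, K)

# PCA of the latent embedding: the singular-value spectrum diagnoses how many
# latent modes carry the dynamics (the "rapid saturation" of the abstract).
s = np.linalg.svd(z - z.mean(axis=0), compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
```

In a real training loop the penalty would be added to the PINN residual loss, and the saturation of `explained` after training would indicate the effective dimensionality of the solution family.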
Journal Opt In: No, I do not wish to participate
Journal Corresponding Email: pedro.tarancon@fqa.ub.edu
Submission Number: 92