Keywords: SiFEN, finite elements, simplicial mesh, piecewise polynomial, barycentric coordinates, Bernstein-Bézier, local function approximation, geometric deep learning, representation learning, calibration, approximation rates
TL;DR: Simplex-FEM Networks (SiFEN): a neural finite-element model that learns a simplicial mesh and local polynomials, yielding single-simplex inference, controllable $C^r$ smoothness, FEM-rate approximation, improved calibration, and lower inference latency.
Abstract: We introduce Simplex-FEM Networks (SiFEN), a learned piecewise-polynomial predictor that represents $f:\mathbb{R}^d \to \mathbb{R}^k$ as a globally $C^r$ finite-element field on a learned simplicial mesh in an optionally warped input space. Each query activates exactly one simplex and at most $d+1$ basis functions via barycentric coordinates, yielding explicit locality, controllable smoothness, and cache-friendly sparsity. SiFEN pairs degree-$m$ Bernstein-Bézier polynomials with a light invertible warp and trains end-to-end with a shape regularizer, a semi-discrete optimal-transport (OT) coverage term, and differentiable edge flips. Under standard shape-regularity and bi-Lipschitz warp assumptions, SiFEN attains the classical FEM approximation rate $M^{-m/d}$ with $M$ mesh vertices. Empirically, on synthetic approximation tasks, tabular regression/classification, and as a drop-in head on compact CNNs, SiFEN matches or surpasses MLPs and Kolmogorov-Arnold networks (KANs) at matched parameter budgets, improves calibration (lower ECE/Brier), and reduces inference latency thanks to its geometric locality. These properties make SiFEN a compact, interpretable, and theoretically grounded alternative to dense MLPs and edge-spline networks.
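For intuition, here is a minimal NumPy sketch of the local evaluation step the abstract describes: compute a query's barycentric coordinates within one simplex, then evaluate a degree-$m$ Bernstein-Bézier polynomial $\sum_\alpha c_\alpha B_\alpha(\lambda)$ with $B_\alpha(\lambda) = \frac{m!}{\alpha!}\lambda^\alpha$ over multi-indices $|\alpha| = m$. The helper names (`barycentric_coords`, `bernstein_eval`), the dense multi-index loop, and the toy coefficients are illustrative assumptions, not SiFEN's actual implementation, which additionally learns the mesh, the invertible warp, and the $C^r$ continuity constraints across simplex faces.

```python
import itertools
import math
import numpy as np

def barycentric_coords(x, verts):
    """Barycentric coordinates of x w.r.t. a d-simplex.

    verts: (d+1, d) array of simplex vertices; x: (d,) query point.
    All coordinates are >= 0 and sum to 1 iff x lies inside the simplex.
    """
    A = (verts[1:] - verts[0]).T                  # (d, d) edge matrix
    lam_rest = np.linalg.solve(A, x - verts[0])   # lambda_1 .. lambda_d
    return np.concatenate([[1.0 - lam_rest.sum()], lam_rest])

def bernstein_eval(lam, coeffs, m):
    """Degree-m Bernstein-Bezier polynomial at barycentric point lam.

    coeffs: dict mapping multi-indices alpha (length-(d+1) tuples with
    sum(alpha) == m) to control values c_alpha. Returns
    sum_alpha c_alpha * (m!/alpha!) * prod(lam**alpha).
    """
    total = 0.0
    for alpha, c in coeffs.items():
        mult = math.factorial(m)                  # multinomial m!/alpha!
        for a in alpha:
            mult //= math.factorial(a)
        total += c * mult * np.prod(lam ** np.array(alpha))
    return total

# Toy usage: a quadratic (m = 2) field on the unit triangle in R^2.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
m = 2
alphas = [a for a in itertools.product(range(m + 1), repeat=3) if sum(a) == m]
coeffs = {a: float(i) for i, a in enumerate(alphas)}  # arbitrary control values
lam = barycentric_coords(np.array([0.2, 0.3]), verts)
print(bernstein_eval(lam, coeffs, m))
```

For clarity this sketch loops over the full degree-$m$ Bernstein basis on the active simplex; a production version would vectorize the basis evaluation and restrict it to the sparse, cache-friendly pattern the abstract emphasizes.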
Primary Area: learning on graphs and other geometries & topologies
Supplementary Material: zip
Submission Number: 1137