Limits of Resolution Equivariance in Fourier Neural Operators

Published: 01 Mar 2026, Last Modified: 05 Mar 2026 · AI&PDE Poster · CC BY-NC-ND 4.0
Keywords: Fourier Neural Operators, Resolution Equivariance
TL;DR: We show that FNOs trained on coarse grids often perform no better on finer grids, and sometimes worse, than simply Fourier-upsampling their coarse predictions, due to spectral truncation and aliasing.
Abstract: Fourier Neural Operators are often assumed to generalize across spatial resolutions, enabling training on a coarse grid and deployment on a finer grid. We test this assumption by contrasting two inference-time choices when moving from training resolution $s$ to test resolution $S>s$: running the FNO directly at $S$, or running it at $s$ and upsampling the prediction to $S$ via Fourier zero-padding. On Darcy flow, we observe that direct fine-grid inference is not reliably beneficial and can be worse than the coarse-grid-plus-upsampling baseline. We further analyze layerwise spectra and find that, under Fourier truncation, intermediate representations increasingly concentrate energy in low frequencies, with high-frequency output produced mainly by late nonlinear/decoder stages. This offers a mechanistic explanation for why an FNO can perform well while retaining few modes, yet remain sensitive under resolution shifts. Our findings highlight a simple but strong baseline for cross-resolution evaluation and point to nonlinear aliasing as a key obstacle to zero-shot resolution equivariance.
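The Fourier zero-padding baseline referenced in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the helper name `fourier_upsample` and the assumption of a square periodic grid are ours. For a signal bandlimited below the coarse Nyquist frequency, zero-padding its spectrum and inverse-transforming reproduces the exact trigonometric interpolant on the fine grid.

```python
import numpy as np

def fourier_upsample(u, S):
    """Upsample a 2D periodic field from s x s to S x S (S > s, both even)
    by zero-padding its centered Fourier spectrum."""
    s = u.shape[0]
    # Centered spectrum: frequencies ordered from -s/2 to s/2 - 1.
    U = np.fft.fftshift(np.fft.fft2(u))
    # Pad symmetrically with zeros up to the fine resolution.
    pad = (S - s) // 2
    U_pad = np.pad(U, pad)
    # Back to standard ordering, inverse transform, and rescale to
    # compensate for the unnormalized FFT convention.
    u_fine = np.fft.ifft2(np.fft.ifftshift(U_pad)).real
    return u_fine * (S / s) ** 2
```

For the paper's baseline, this operation would be applied to the FNO's coarse-resolution prediction before comparing against direct fine-grid inference.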
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: alex.colagrande@dauphine.psl.eu
Submission Number: 133