Convergence rates for Penalised Least Squares estimators in PDE-constrained regression problems

12 May 2023 · OpenReview Archive Direct Upload
Abstract: We consider PDE-constrained nonparametric regression problems in which the parameter $f$ is the unknown coefficient function of a second order elliptic partial differential operator $L_f$, and the unique solution $u_f$ of the boundary value problem $L_f u = g_1$ on $\mathcal{O}$, $u = g_2$ on $\partial\mathcal{O}$, is observed corrupted by additive Gaussian white noise. Here $\mathcal{O}$ is a bounded domain in $\mathbb{R}^d$ with smooth boundary $\partial\mathcal{O}$, and $g_1, g_2$ are given functions defined on $\mathcal{O}$ and $\partial\mathcal{O}$, respectively. Concrete examples include $L_f u = \Delta u - 2fu$ (Schrödinger equation with attenuation potential $f$) and $L_f u = \mathrm{div}(f \nabla u)$ (divergence form equation with conductivity $f$). In both cases, the parameter space $\mathcal{F} = \{f \in H^\alpha(\mathcal{O}) \mid f > 0\}$, $\alpha > 0$, where $H^\alpha(\mathcal{O})$ is the usual order-$\alpha$ Sobolev space, induces a set of non-linearly constrained regression functions $\{u_f : f \in \mathcal{F}\}$. We study Tikhonov-type penalised least squares estimators $\hat{f}$ for $f$. The penalty functionals are of squared Sobolev-norm type, and thus $\hat{f}$ can also be interpreted as a Bayesian 'MAP' estimator corresponding to some Gaussian process prior. We derive rates of convergence of $\hat{f}$ and of $u_{\hat{f}}$ to $f$ and $u_f$, respectively, and we prove that the rates obtained are minimax-optimal in prediction loss. Our bounds are derived from a general convergence rate result for non-linear inverse problems whose forward map satisfies a modulus of continuity condition, a result of independent interest that is also applicable to linear inverse problems, as illustrated in an example with the Radon transform.
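
To make the estimation scheme concrete, below is a minimal numerical sketch (not taken from the paper) of a Tikhonov-type penalised least squares fit for a one-dimensional Schrödinger-type problem $L_f u = u'' - 2fu = g_1$ on $(0,1)$ with $u = 0$ on the boundary. The grid size, the source term $g_1 \equiv -1$, the boundary data $g_2 \equiv 0$, the penalty weight, the optimiser, and the discrete $H^1$-type penalty standing in for the squared $H^\alpha$ norm are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

# Discretisation of the 1-D problem (all choices below are illustrative).
n = 50                                  # number of interior grid points
h = 1.0 / (n + 1)                       # mesh width on (0, 1)
x = np.linspace(h, 1.0 - h, n)          # interior grid
g1 = -np.ones(n)                        # assumed source term g1; g2 = 0 (zero Dirichlet data)

def solve_forward(f):
    """Finite-difference solve of L_f u = u'' - 2 f u = g1, u(0) = u(1) = 0."""
    A = (np.diag(-2.0 / h**2 - 2.0 * f)
         + np.diag(np.ones(n - 1) / h**2, 1)
         + np.diag(np.ones(n - 1) / h**2, -1))
    return np.linalg.solve(A, g1)

# Ground truth f0 > 0 and white-noise-corrupted observations Y of u_{f0}.
rng = np.random.default_rng(0)
f0 = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
Y = solve_forward(f0) + 0.01 * rng.standard_normal(n)

lam = 1e-4                              # penalty weight (assumed)

def objective(f):
    """Squared prediction error plus a discrete H^1-type penalty
    (a stand-in for the squared H^alpha Sobolev norm)."""
    resid = solve_forward(f) - Y
    fprime = np.diff(f) / h
    return h * np.sum(resid**2) + lam * h * (np.sum(f**2) + np.sum(fprime**2))

# Enforce f > 0 (the constraint defining the parameter space) via box bounds.
res = minimize(objective, x0=np.ones(n), method="L-BFGS-B",
               bounds=[(1e-3, None)] * n)
f_hat = res.x
print("relative L2 error of f_hat:", np.linalg.norm(f_hat - f0) / np.linalg.norm(f0))

Under this reading, the discrete penalty plays the role of the squared Sobolev-norm penalty from the abstract, so the minimiser can also be viewed as a MAP estimator for a Gaussian prior whose covariance is determined by that penalty.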