Improving Deep Learning-Based Height Estimation from Single SAR Images by Injecting Sensor Parameters
Abstract: The deep learning-based estimation of topographic heights from single remote sensing images has shown great potential in recent years. Drawing inspiration from the computer vision task of single-image depth estimation, the approaches published so far all center on translating the input remote sensing image into a height image via convolutional neural networks. Most existing works, however, neglect the fact that remote sensing data come from well-calibrated sensors carried by satellites flying in well-controlled orbits. As a consequence, high-quality meta-information is available for most remote sensing images and can be used to supply the purely data-driven neural network with physically meaningful auxiliary information. This holds particularly for synthetic aperture radar (SAR) sensors, which use active imaging technology and are thus largely independent of external conditions. In this paper, we investigate whether including the radar viewing angle, a critical sensor parameter in SAR imaging, benefits deep learning-based single-image height estimation from very-high-resolution (VHR) SAR data.
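One common way to inject a scalar sensor parameter such as the viewing angle into a convolutional network is to broadcast it into an extra input channel, so every pixel carries the conditioning value. The abstract does not specify how the angle is injected in this paper, so the following is only a minimal illustrative sketch of that generic channel-concatenation idea; the function name, the normalization range, and the channel layout are all assumptions, not the authors' method.

```python
import numpy as np

def add_angle_channel(sar_image, viewing_angle_deg, angle_range=(20.0, 50.0)):
    """Append a constant channel encoding the radar viewing angle.

    Hypothetical pre-processing step: the scalar angle is normalized over
    an assumed plausible incidence-angle range and broadcast to a full
    image plane, so a CNN can condition on it at every pixel. The
    (20, 50) degree range is an illustrative choice, not from the paper.
    """
    lo, hi = angle_range
    norm = (viewing_angle_deg - lo) / (hi - lo)
    # Accept both single-channel (H, W) and multi-channel (H, W, C) input.
    img = sar_image if sar_image.ndim == 3 else sar_image[..., None]
    h, w = img.shape[:2]
    angle_plane = np.full((h, w, 1), norm, dtype=img.dtype)
    return np.concatenate([img, angle_plane], axis=-1)
```

The resulting tensor can be fed to an otherwise unchanged image-to-height network; the only architectural change is one extra input channel in the first convolution.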