Random Undersampled Digital Elevation Model Super-Resolution Based on Terrain Feature-Aware Deep Learning Network
Abstract: The digital elevation model (DEM) provides important data support for geographic information analysis. However, owing to measurement costs and complex terrain, collected DEMs often suffer from randomly missing sample points and low sampling density. Neural networks have been shown to have the potential to reconstruct low-resolution DEM data into high-resolution data. Because general images and DEMs share similar structural features, some researchers have directly applied general image super-resolution methods to DEM super-resolution tasks, achieving better results than traditional interpolation-based methods. However, DEM data and general images have distinct characteristics and fundamental differences, so directly applying image-based super-resolution methods to extract terrain features is inappropriate. Meanwhile, general deep-learning-based spatial interpolation algorithms usually have low model complexity and lack specifically designed loss functions, which often leads to significant interpolation errors. To address these problems, we conducted an in-depth study of the terrain feature patterns of DEMs and propose a terrain feature-aware DEM super-resolution reconstruction network, named D-ResDCN. The network integrates a deep residual module, a deformable convolution module, and an upsampling module, which extract deep terrain features, extract adaptive terrain features, and increase the resolution of the DEM data, respectively. Furthermore, to accurately reconstruct the local detailed features of DEMs, we constructed a joint loss function with adaptive weight adjustment that combines content loss, perceptual loss, and terrain feature loss.
Experimental results show that, compared with traditional spatial interpolation and classical super-resolution networks (SRCNN, SRResNet, SRGAN, and ESRGAN), our D-ResDCN model is comparable to the best-performing SRCNN method in peak signal-to-noise ratio and structural similarity index, while its mean absolute error and root-mean-square error are 10.5% and 10.1% lower, respectively, than those of the SRResNet method.
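The joint loss described above combines several weighted terms. As a minimal NumPy sketch (not the paper's actual implementation), the content term can be a pixel-wise error and the terrain feature term a mismatch in slope fields derived from elevation gradients; the perceptual term, which requires a pretrained feature extractor, is stubbed out here, and the weight values are illustrative assumptions:

```python
import numpy as np

def slope_magnitude(dem):
    # Approximate terrain slope from finite-difference gradients of elevation.
    gy, gx = np.gradient(dem)
    return np.sqrt(gx ** 2 + gy ** 2)

def joint_loss(pred, target, w_content=1.0, w_terrain=0.1):
    # Content loss: pixel-wise MSE between reconstructed and reference DEM.
    content = np.mean((pred - target) ** 2)
    # Terrain feature loss: discrepancy between slope fields (a hypothetical
    # stand-in for the paper's terrain feature term; the perceptual loss,
    # which needs a pretrained network, is omitted from this sketch).
    terrain = np.mean((slope_magnitude(pred) - slope_magnitude(target)) ** 2)
    # In the paper the weights are adjusted adaptively; fixed values here.
    return w_content * content + w_terrain * terrain
```

For identical DEMs the loss is zero; a constant elevation offset affects only the content term, since the slope field is unchanged.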