Joint Defocus Deblurring and Superresolution Learning Network for Autonomous Driving

Published: 01 Jan 2024 · Last Modified: 13 May 2025 · IEEE Intell. Transp. Syst. Mag. 2024 · CC BY-SA 4.0
Abstract: With the development of autonomous driving and computer vision, high-quality images have become increasingly important. In practical applications, however, images captured by onboard cameras suffer from various degradations caused by lighting, device response speed, distance, and other factors. One of the most common is the combination of defocus blur and low resolution. Yet current image reconstruction methods almost always target a single form of degradation; none handles low-resolution (LR) defocused images well. We therefore propose a new task for the composite defocus-blur and LR setting and present a novel model: the Joint Learning of Defocus Deblurring with Super-Resolution Network (J-D²SR). The model comprises two subnetworks: an auxiliary network, the HR Defocus Map Estimation Network (HRDME-Net), and a main network, the Super-Resolution Reconstruction Network (SRR-Net). The auxiliary network predicts the high-resolution (HR) defocus map so that the model can capture the global defocus blur distribution; the defocus map and the auxiliary network's features are then passed to the main network to assist the image reconstruction task. We verify our model on defocus blur and superresolution (SR) datasets, achieving state-of-the-art performance both quantitatively and qualitatively; the experimental results demonstrate the effectiveness of our method.
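The abstract describes a pipeline in which an auxiliary network's outputs condition the main reconstruction network. A minimal sketch of that data flow, with hypothetical function names and trivial placeholder computations standing in for the paper's unspecified layers:

```python
# Schematic of the two-subnetwork design from the abstract. All function
# names and internals are placeholders (assumptions), not the paper's
# actual implementation: HRDME-Net predicts an HR defocus map plus
# intermediate features; SRR-Net consumes the LR image, the defocus map,
# and those features to reconstruct the HR image.

def hrdme_net(lr_image):
    """Auxiliary network: estimate the defocus map and expose
    intermediate features (stand-in: trivial per-pixel maths)."""
    defocus_map = [[abs(px) for px in row] for row in lr_image]
    features = [sum(row) for row in lr_image]
    return defocus_map, features

def srr_net(lr_image, defocus_map, features):
    """Main network: reconstruct the HR image guided by the defocus
    map and auxiliary features (stand-in: 2x nearest-neighbour upscale)."""
    scale = 2
    hr_image = []
    for row in lr_image:
        up_row = [px for px in row for _ in range(scale)]
        hr_image.extend([list(up_row) for _ in range(scale)])
    return hr_image

def j_d2sr(lr_image):
    """Joint pipeline: auxiliary outputs feed the main network."""
    defocus_map, features = hrdme_net(lr_image)
    return srr_net(lr_image, defocus_map, features)

hr = j_d2sr([[0.1, 0.2], [0.3, 0.4]])
print(len(hr), len(hr[0]))  # 4 4
```

The key design point conveyed by the abstract is this conditioning structure: the main network does not see only the LR input, but also the estimated HR defocus map and the auxiliary features, which carry the global blur distribution.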