Abstract: Vision-based intelligent systems are extensively used in autonomous driving, traffic monitoring, and transportation surveillance due to their high performance, low cost, and ease of installation. However, their effectiveness is often compromised by adverse conditions such as haze, fog, low light, motion blur, and low resolution, leading to reduced visibility and increased safety risks. Additionally, the prevalence of high-definition imaging in embedded and mobile devices creates a conflict between large image sizes and limited computing resources. To address these issues and enhance visual perception for intelligent systems operating under adverse conditions, this study proposes an all-in-one isomorphic dual-branch (IDB) framework consisting of two structurally identical branches that serve different functions, a loss-attention (LA) learning strategy, and a feature fusion super-resolution (FFSR) module. The versatile IDB network employs a simple and effective encoder-decoder structure as the backbone for both branches, which can be replaced with task-specific tailored backbones. The plug-in LA strategy differentiates the functions of the two branches, adapting them to various tasks without increasing computational demands during inference. The FFSR module concatenates multi-scale features and progressively restores details in downsampled images, producing outputs with improved visibility, brightness, edge sharpness, and color fidelity. Extensive experimental results demonstrate that the proposed framework outperforms several state-of-the-art methods for image dehazing, low-light enhancement, image deblurring, and super-resolution image reconstruction while maintaining low computational overhead. The associated code is publicly available at https://github.com/lizhangray/IDBall.
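To make the dual-branch idea concrete, the following is a minimal sketch, not the authors' implementation (their code is at the GitHub link above). It assumes a PyTorch-style encoder-decoder backbone and only illustrates how two isomorphic branches can share one architecture while being assigned different roles by the training losses; all class and parameter names here are hypothetical.

```python
# Hypothetical sketch of an isomorphic dual-branch (IDB) layout.
# This is NOT the paper's code; it only illustrates two structurally
# identical encoder-decoder branches intended for different functions.
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """A simple encoder-decoder backbone; task-specific backbones can be swapped in."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class IsomorphicDualBranch(nn.Module):
    """Two branches with identical structure; per the abstract, their functions
    are differentiated by the loss-attention training strategy rather than by
    architectural differences."""

    def __init__(self):
        super().__init__()
        self.branch_a = EncoderDecoder()  # hypothetical role: restoration
        self.branch_b = EncoderDecoder()  # hypothetical role: enhancement

    def forward(self, x: torch.Tensor):
        return self.branch_a(x), self.branch_b(x)


if __name__ == "__main__":
    model = IsomorphicDualBranch()
    dummy = torch.randn(1, 3, 64, 64)      # toy degraded input
    out_a, out_b = model(dummy)
    print(out_a.shape, out_b.shape)         # both branches preserve input resolution
```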