Abstract: In recent years, large-scale models have driven significant progress in image super-resolution. Their efficacy, however, comes at the cost of substantial model size, which poses challenges when deploying them on resource-constrained devices. In light of this, we introduce a lightweight approach to image super-resolution that leverages a simple recurrent neural network built around a recurrent convolution block. The proposed model uses fewer than 75k parameters, roughly 10 times fewer than state-of-the-art transformer-based super-resolution models. Despite its small size, it performs well on image super-resolution tasks, both visually and quantitatively. Our work presents a promising direction for deploying efficient super-resolution models on resource-limited devices.
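The parameter savings behind a recurrent convolution block come from weight sharing: one block is applied repeatedly, so the parameter count does not grow with the number of recurrent steps. A back-of-the-envelope count illustrates the idea; the channel width, kernel size, and step count below are hypothetical illustrations, not figures from this paper.

```python
def conv_params(in_ch, out_ch, k, bias=True):
    # Parameter count of one 2-D convolution layer:
    # in_ch * out_ch weights per k x k kernel, plus optional biases.
    return in_ch * out_ch * k * k + (out_ch if bias else 0)

# Hypothetical configuration: 48 channels, 3x3 kernels, block unrolled 6 times.
channels, kernel, steps = 48, 3, 6

# Recurrent (weight-shared) block: one set of weights, reused every step.
shared = conv_params(channels, channels, kernel)
# Feed-forward equivalent: a distinct convolution per step.
stacked = steps * conv_params(channels, channels, kernel)

print(shared)   # 20784 -- constant in the number of recurrent steps
print(stacked)  # 124704 -- grows linearly with depth
```

Under these assumed settings the shared block stays well under the 75k-parameter budget mentioned in the abstract, whereas an unrolled stack of distinct convolutions would exceed it.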