Pre-Training and Fine-Tuning Image Super-Resolution Models for Efficient Video Super-Resolution

20 Sept 2023 (modified: 25 Mar 2024) | ICLR 2024 Conference Withdrawn Submission
Keywords: Image Super-Resolution; Efficient Video Super-Resolution; Pre-Training and Fine-Tuning
TL;DR: We propose a novel framework for adapting pre-trained image super-resolution (SR) models to tackle the challenging task of efficient video super-resolution.
Abstract: In this paper, we propose a novel framework for adapting pre-trained image super-resolution (SR) models to the challenging task of efficient video SR. We freeze the pre-trained image SR model and fine-tune it by inserting several lightweight adapter modules. These adapters facilitate spatial and temporal learning, progressively equipping the image SR model with the spatiotemporal reasoning required for video SR. The adapters are compact and extendable, adding only a small number of trainable parameters per video dataset. Moreover, because the parameters of the image SR model remain unchanged, they are shared across datasets, which allows us to train video SR models quickly and efficiently. Remarkably, despite having significantly fewer parameters, our method achieves competitive or even superior performance compared to existing video SR methods across multiple benchmarks.
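The abstract does not specify the adapter design, so the following is a minimal sketch of the general idea, assuming a PyTorch-style frozen image SR backbone. The TemporalAdapter module, its bottleneck layout, and the adapter_parameters helper are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    """Hypothetical bottleneck adapter; the paper's actual module design is not given here."""

    def __init__(self, channels: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)       # per-frame channel reduction
        self.temporal = nn.Conv1d(bottleneck, bottleneck, kernel_size=3,
                                  padding=1, groups=bottleneck)           # depthwise mixing across frames
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)          # project back to backbone width
        nn.init.zeros_(self.up.weight)                                    # zero-init so the adapter starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        y = self.down(x.reshape(b * t, c, h, w))
        y = y.reshape(b, t, -1, h, w).permute(0, 3, 4, 2, 1)   # (B, H, W, bottleneck, T)
        y = self.temporal(y.reshape(b * h * w, -1, t))          # temporal context at each spatial location
        y = y.reshape(b, h, w, -1, t).permute(0, 4, 3, 1, 2)    # (B, T, bottleneck, H, W)
        y = self.up(y.reshape(b * t, -1, h, w)).reshape(b, t, c, h, w)
        return x + y                                             # residual connection around the adapter


def adapter_parameters(image_sr_model: nn.Module, adapters: nn.Module):
    """Freeze the pre-trained image SR backbone; only the adapter parameters are trained."""
    for p in image_sr_model.parameters():
        p.requires_grad = False
    return list(adapters.parameters())
```

Under this reading, only the parameters returned by adapter_parameters are handed to the optimizer, so each new video dataset adds a small adapter set while the frozen image SR weights are shared across all of them.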
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2457