AdaSR: Adaptive Super Resolution for Cross Platform and Dynamic Runtime Environments

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: super resolution, neural networks, architecture search, compression
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We present AdaSR, a framework that allows super-resolution models to adapt their inference latency at run time in order to accommodate dynamic resource availability and cross-platform deployment.
Abstract: Image super-resolution (SR) models have shown great capability in improving the visual quality of low-resolution images. Due to the compute and memory budgets of diverse platforms, e.g., cloud and edge devices, practitioners and researchers have to either (1) design different architectures and/or (2) compress the same model to different levels. Additionally, a majority of works in the current literature aim to achieve state-of-the-art performance by hand-designing a single efficient model. However, even on the same hardware, the available compute resources change as other applications run. As such, a single model that satisfies the required frames-per-second (FPS) when executed in isolation may not be suitable when other applications are running concurrently. To overcome these issues, we propose AdaSR, an Adaptive SR framework with shared architecture and weights for cross-platform deployment and dynamic runtime environments. Unlike other works in the literature, our work focuses on developing multiple models within a larger meta-graph so that they can fulfill latency requirements while sacrificing as little performance as possible. In particular, AdaSR can be used to (1) customize architectures for different hardware (e.g., different security cameras), and (2) adaptively change the compute graph in dynamic runtime environments (e.g., mobile phones with concurrently running applications). Unlike prior art, AdaSR achieves this by adaptively changing the depth and the channel size with shared weights and architecture, which introduces no extra memory and/or storage cost. To stabilize the shared-weight training of AdaSR, we propose a progressive approach in which we derive loss functions for each block and apply function-matching operations with max-norm regularization to address dimension mismatches. We extensively test AdaSR on different block-based GAN models and demonstrate that AdaSR maintains Pareto-optimal latency vs. performance tradeoffs with a much smaller memory footprint while supporting dynamic runtime environments.
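The abstract describes a shared-weight meta-graph whose depth and channel width can be reduced at run time. The snippet below is a minimal, hypothetical sketch of that idea only, not the authors' implementation: it slices the same convolution weights to realize narrower/shallower configurations without storing extra models. All names (AdaptiveConv, AdaSRBackbone, the channels/depth arguments) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv(nn.Module):
    """3x3 conv whose active input/output channels are chosen by slicing shared weights."""
    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in, 3, 3) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, out_ch):
        in_ch = x.shape[1]
        w = self.weight[:out_ch, :in_ch]   # same underlying weights for every config
        b = self.bias[:out_ch]
        return F.conv2d(x, w, b, padding=1)

class AdaSRBackbone(nn.Module):
    """Block-based backbone whose depth and channel width are picked at inference time."""
    def __init__(self, max_channels=64, max_depth=16):
        super().__init__()
        self.head = AdaptiveConv(3, max_channels)
        self.blocks = nn.ModuleList(
            AdaptiveConv(max_channels, max_channels) for _ in range(max_depth)
        )

    def forward(self, x, channels=64, depth=16):
        h = F.relu(self.head(x, channels))
        for block in self.blocks[:depth]:          # drop trailing blocks to reduce depth
            h = h + F.relu(block(h, channels))     # residual block at reduced width
        return h

# Usage: the same parameters serve both a full-size and a low-latency configuration,
# so switching configurations at run time adds no memory or storage overhead.
model = AdaSRBackbone()
lr_image = torch.randn(1, 3, 64, 64)
full = model(lr_image, channels=64, depth=16)      # higher quality, higher latency
small = model(lr_image, channels=32, depth=8)      # lower latency, shared weights
```

This omits the upsampling tail, GAN training, and the progressive per-block losses with max-norm regularization mentioned in the abstract; it only illustrates how one compute graph can expose multiple latency operating points.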
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3022