Awakening Collective Wisdom: Elevating Super-Resolution Network Generalization through Cooperative Game Theory

17 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Super-resolution; image restoration; low-level vision
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Improving the generalization capability of image super-resolution algorithms is a fundamental challenge when deploying them in real-world scenarios. Prior methods often relied on the assumption that training on diverse data improves generalization, leading to the development of complex degradation models that simulate real-world degradation. Unlike previous works, we present a novel training strategy grounded in cooperative game theory to improve the generalization capacity of existing image super-resolution algorithms. Within this framework, we conceptualize all neurons in the network as participants engaged in a cooperative relationship, where their collective responses determine the final prediction. As a solution, we propose to awaken suppressed neurons that hinder generalization through our Erase-and-Awaken Training Strategy (EATS), thus fostering equitable contributions among all neurons and effectively improving generalization performance. EATS offers several compelling benefits. 1) Seamless integration with existing architectures: It integrates with existing networks to enhance their generalization capability in unseen scenarios. 2) Theoretically feasible strategy: We theoretically prove the effectiveness of our strategy in enhancing the Shapley value (reflecting each participant's contribution to the prediction). 3) Consistent performance improvements: Comprehensive experiments on various challenging datasets consistently demonstrate performance improvements when employing our strategy. The code will be publicly available.
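To illustrate the Shapley-value notion invoked in point 2 (this is a generic sketch of Shapley value estimation for a cooperative game, not the paper's EATS implementation; the toy neuron weights below are hypothetical), a standard Monte Carlo estimator averages each player's marginal contribution over random orderings:

```python
import random

def shapley_monte_carlo(players, value_fn, num_samples=2000, seed=0):
    """Estimate Shapley values by sampling random player orderings.

    For each sampled permutation, a player's marginal contribution is
    the change in coalition value when that player joins the coalition
    of players preceding it in the ordering.
    """
    rng = random.Random(seed)
    shapley = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = []
        prev_value = value_fn(coalition)
        for p in order:
            coalition.append(p)
            new_value = value_fn(coalition)
            shapley[p] += new_value - prev_value
            prev_value = new_value
    return {p: v / num_samples for p, v in shapley.items()}

# Toy "network": three neurons whose joint response is a weighted sum;
# a suppressed neuron (weight near zero) gets a near-zero Shapley value,
# mirroring the unequal contributions EATS is designed to rebalance.
weights = {"n1": 0.7, "n2": 0.25, "n3": 0.05}
value = lambda coalition: sum(weights[p] for p in coalition)
print(shapley_monte_carlo(list(weights), value))
```

For this additive toy game the estimates coincide with the weights themselves; in a real network, `value_fn` would instead measure prediction quality with a subset of neurons active, which is far more expensive to evaluate.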
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 917