Keywords: Compressive Imaging, Compressive Sensing, SpaRSA algorithm, Mixture-of-Experts
TL;DR: This work is the first attempt to explore the deep unfolding paradigm within the Mixture-of-Experts framework for Compressive Imaging.
Abstract: Deep Unfolding-based Networks (DUNs) have attracted attention due to their high performance and a certain degree of interpretability. However, existing DUNs often lack flexibility in handling the details and features of different images during reconstruction, as they typically cascade multiple iterative modules that share the same structure across iterations. To address this limitation, we propose DUMoE, a novel sparsely-activated Deep Unfolding Mixture-of-Experts (MoE) architecture for Compressive Imaging (CI). By integrating the deep unfolding paradigm into the MoE framework, DUMoE adaptively reconstructs different images by invoking different experts at each iteration stage. Specifically, we unfold traditional SpaRSA iterations into the experts of DUMoE and employ top-1 switch routing to reduce computational cost and enhance flexibility. Additionally, we introduce a Degradation-Aware Mask within the self-attention mechanism to prioritize the image degradation caused by dimensionality reduction in CI, thereby enhancing reconstruction fidelity. Moreover, we incorporate a Multi-Scale Gate to improve DUMoE's adaptability to image features at different scales and to facilitate information transmission across iteration stages. Extensive experiments across various CI recovery tasks, including natural image compressive sensing, magnetic resonance imaging, and snapshot compressive imaging, demonstrate the superior performance and effectiveness of DUMoE. To the best of our knowledge, this is the first work to leverage the deep unfolding paradigm within the MoE framework.
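As a rough illustration of the two ingredients named in the abstract, a SpaRSA iteration and top-1 switch routing, the sketch below pairs a classical soft-thresholding SpaRSA step with an argmax router over a small expert pool. All names here (`sparsa_step`, `top1_switch_route`, `router_w`) and the parameterization of each expert as a fixed (step size, sparsity weight) pair are illustrative assumptions; in DUMoE itself the experts are learned unfolded network modules, and SpaRSA ordinarily selects its step size per iteration (e.g., via a Barzilai-Borwein rule) rather than using fixed constants.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of the l1 norm: the shrinkage step at the core of SpaRSA.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def sparsa_step(x, A, y, alpha, lam):
    # One SpaRSA iteration: a gradient step on the data-fidelity term
    # followed by soft-thresholding with threshold lam / alpha.
    grad = A.T @ (A @ x - y)
    return soft_threshold(x - grad / alpha, lam / alpha)

def top1_switch_route(x, A, y, router_w, experts):
    # Top-1 switch routing: a linear router scores all experts, but only
    # the argmax expert is evaluated, so per-sample compute stays at one
    # expert regardless of how many experts exist in the pool.
    logits = router_w @ x
    k = int(np.argmax(logits))
    alpha, lam = experts[k]
    return sparsa_step(x, A, y, alpha, lam), k

# Toy usage: recover a sparse signal from random compressive measurements.
rng = np.random.default_rng(0)
m, n = 64, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)   # hypothetical sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
y = A @ x_true

x = np.zeros(n)
router_w = rng.standard_normal((4, n))         # untrained router, for illustration
experts = [(10.0, 0.05), (10.0, 0.1), (20.0, 0.05), (20.0, 0.1)]
for _ in range(100):
    x, k = top1_switch_route(x, A, y, router_w, experts)
```

The design point the sketch is meant to convey: because only the selected expert runs, enlarging the expert pool increases model capacity without increasing the per-image cost of each unfolded iteration.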
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4755