Adaptive Resolution Residual Networks — Generalizing Across Resolutions Easily and Efficiently

TMLR Paper 3947 Authors

10 Jan 2025 (modified: 22 Mar 2025) · Under review for TMLR · CC BY 4.0
Abstract: The majority of signal data captured in the real world comes from numerous sensors with different resolutions. In practice, however, most deep learning architectures are fixed-resolution; they consider a single resolution at training time and inference time. This is convenient to implement but fails to take full advantage of the diverse signal data that exists. In contrast, other deep learning architectures are adaptive-resolution; they directly allow various resolutions to be processed at training time and inference time. This benefits robustness and computational efficiency but introduces difficult design constraints that hinder mainstream use. In this work, we address the shortcomings of both approaches by introducing Adaptive Resolution Residual Networks (ARRNs), which inherit the advantages of adaptive-resolution methods and the ease of use of fixed-resolution methods. We construct ARRNs from Laplacian residuals, which serve as generic adaptive-resolution adapters for fixed-resolution layers, and which allow instantly casting high-resolution ARRNs into low-resolution ARRNs by simply omitting Laplacian residuals, thus reducing computational cost. We guarantee this yields numerically identical evaluation on low-resolution signals when using perfect smoothing kernels. We complement this novel component with Laplacian dropout, which regularizes for robustness to a distribution of lower resolutions and against numerical errors that may be induced by approximate smoothing kernels. We provide a solid grounding for the advantageous properties of ARRNs through a theoretical analysis based on neural operators, and empirically show that ARRNs embrace the challenge posed by diverse resolutions with greater flexibility, robustness, and computational efficiency.
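To make the casting mechanism concrete, here is a minimal conceptual sketch (not the authors' implementation) of a Laplacian residual in NumPy. The names `smooth`, `laplacian_residual`, and the box-filter smoothing kernel are illustrative assumptions; the paper uses Whittaker-Shannon kernels and learned layers. The key property shown is that the fixed-resolution layer only acts on the high-frequency detail, so a signal carried entirely by the low-pass band passes through unchanged when the residual is omitted.

```python
import numpy as np

def smooth(x, kernel_size=5):
    # Illustrative low-pass filter (a simple box filter); the paper's
    # construction assumes a proper smoothing kernel such as
    # Whittaker-Shannon, not this crude stand-in.
    k = np.ones(kernel_size) / kernel_size
    return np.convolve(x, k, mode="same")

def laplacian_residual(x, layer):
    # Conceptual Laplacian residual: split the signal into a low-pass
    # band and a high-frequency detail band, apply the fixed-resolution
    # layer only to the detail, and add the low-pass band back.
    low = smooth(x)
    return low + layer(x - low)

# "Casting" to a lower resolution corresponds to omitting the residual
# branch entirely and keeping only the low-pass path: when the input has
# no content above the cutoff, x - low vanishes, so both paths agree.
```

As a sanity check on the algebra: with an identity layer the block reproduces its input exactly, since `low + (x - low) == x`, and with a zero layer it reduces to the pure low-pass path.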
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We have improved our introduction by better conveying the advantages and limitations of *fixed-resolution methods* and *adaptive-resolution methods with a varying sampling window*. We have also linked Laplacian pyramids constructed using Whittaker-Shannon smoothing kernels to Shannon wavelet decompositions in our background section.
Assigned Action Editor: ~Yunhe_Wang1
Submission Number: 3947