Modelling Long Range Dependencies in $N$D: From Task-Specific to a General Purpose CNN

Published: 01 Feb 2023, Last Modified: 27 Feb 2023 · ICLR 2023 poster
Keywords: convolutional neural networks, continuous convolutional kernels, CNNs, continuous parameterizations, sequential data, visual data, point-cloud data
TL;DR: We formulate a CNN architecture that can be used across input resolutions, lengths, and dimensionalities (1D, 2D, 3D), and demonstrate its viability on several 1D, 2D, and 3D tasks.
Abstract: Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to account for the length, resolution, and dimensionality of the input data. In this work, we tackle the need for problem-specific CNN architectures. We present the Continuous Convolutional Neural Network (CCNN): a single CNN able to process data of arbitrary resolution, dimensionality and length without any structural changes. Its key components are its continuous convolutional kernels, which model long-range dependencies at every layer and thus remove the need for the task-dependent downsampling and depths of current CNN architectures. We showcase the generality of our method by using the same architecture for tasks on sequential ($1{\rm D}$), visual ($2{\rm D}$) and point-cloud ($3{\rm D}$) data. Our CCNN matches and often outperforms the current state-of-the-art across all tasks considered.
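To make the core idea concrete, below is a minimal PyTorch sketch of a continuous convolutional kernel: a small MLP maps $N$-D coordinates to kernel values, so the same module can generate 1D, 2D, or 3D kernels at any resolution, including kernels as large as the input (a global receptive field at every layer). This is an illustrative assumption, not the paper's exact method: the actual CCNN uses a more elaborate kernel parameterization, and all names here (`ContinuousKernel`, `hidden`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContinuousKernel(nn.Module):
    """Hypothetical sketch: an MLP from N-D coordinates to kernel values.

    Because the kernel is a continuous function of position, it can be sampled
    at any size/resolution and for any dimensionality without changing weights.
    """

    def __init__(self, dim, in_channels, out_channels, hidden=32):
        super().__init__()
        self.dim = dim
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, in_channels * out_channels),
        )

    def forward(self, kernel_size):
        # Sample a coordinate grid in [-1, 1]^dim at the requested resolution.
        axes = [torch.linspace(-1.0, 1.0, kernel_size) for _ in range(self.dim)]
        coords = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)
        spatial = coords.shape[:-1]
        # Evaluate the MLP at every position, then reshape to conv layout:
        # (out_channels, in_channels, *spatial).
        weights = self.mlp(coords.view(-1, self.dim))
        return weights.t().view(self.out_channels, self.in_channels, *spatial)


# Usage: the same generator class works for 1D sequences or 2D images; here,
# a kernel as large as a 32x32 input yields a global receptive field.
gen = ContinuousKernel(dim=2, in_channels=3, out_channels=8)
x = torch.randn(1, 3, 32, 32)
k = gen(kernel_size=33)
y = F.conv2d(x, k, padding=16)  # output stays 32x32
```

Note this sketch regenerates the full discrete kernel per call; in practice, sampling large continuous kernels is typically paired with FFT-based convolution to keep the cost of global receptive fields manageable.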