Deterministic Strided and Transposed Convolutions for Point Clouds Operating Directly on the Points

11 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: Farthest Point Sampling, Strided Convolutions, Point Clouds, Autoencoder, Auxiliary Selection Loss
Abstract: The application of Convolutional Neural Networks (CNNs) to point cloud data as geometric representations of real objects has gained considerable attention. However, point clouds are less structured than images, which makes it difficult to directly transfer important CNN operations (initially developed for images) to point clouds. For instance, the order of a set of points carries no semantic information; ideally, therefore, all operations should be invariant to the point order. Inspired by CNN operations on images, we transfer the concepts of strided and transposed convolutions to point cloud CNNs, enabling deterministic network modules that operate directly on the points. To this end, we propose a novel strided convolutional layer with an auxiliary loss which, as we prove theoretically, enforces a uniform distribution of the selected points within the lower feature hierarchy. This loss ensures a learnable and deterministic selection, unlike the iterative Farthest Point Sampling (FPS) commonly used in point cloud CNNs. We evaluate the flexibility of the proposed operations by deploying them in exemplary network architectures and comparing their performance with that of similar existing structures. Notably, we develop a lightweight autoencoder architecture based on our proposed operators, which achieves the best generalization performance.
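For context, the iterative FPS baseline that the abstract contrasts with can be sketched as follows. This is a minimal pure-Python illustration of the standard greedy algorithm, not the authors' implementation; the function name and the choice of starting point are ours:

```python
import math

def farthest_point_sampling(points, k):
    """Greedy FPS sketch: repeatedly select the point farthest from the
    set already chosen. points is a list of coordinate tuples; returns
    the indices of the k selected points."""
    selected = [0]  # deterministic but arbitrary start: the first point
    # nearest[i] = distance from point i to its closest selected point
    nearest = [math.dist(p, points[0]) for p in points]
    for _ in range(k - 1):
        idx = max(range(len(points)), key=lambda i: nearest[i])
        selected.append(idx)
        nearest = [min(nearest[i], math.dist(points[i], points[idx]))
                   for i in range(len(points))]
    return selected
```

The greedy selection spreads samples out over the input, but the sequential argmax makes it iterative and, depending on the starting point, non-deterministic across permutations; the paper's strided convolution with an auxiliary selection loss is proposed as a learnable, deterministic alternative.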
Supplementary Material: zip
Submission Number: 14494