DSConv: Dynamic Convolution On Serialized Point Cloud

16 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Point cloud serialization; Point cloud analysis; 3D object classification; 3D semantic segmentation; Deep learning architectures
TL;DR: We propose DSConv, a point cloud convolution method that operates on serialized points and exploits the position representation of points to enhance flexibility. It achieves excellent performance on multiple datasets while maintaining good throughput.
Abstract: In recent years, research on point-based architectures has advanced rapidly, showcasing their competitive performance. However, the unstructured nature of point clouds limits the application of effective operators such as convolutions for feature extraction. Although many works have attempted to structure point cloud data and introduce convolutions or transformers, the complex spatial mappings and cumbersome convolution implementations in these methods limit the real-time performance of the resulting models. Furthermore, excessive structural mapping ignores the independence of each point's position representation and fails to capture finer-grained features. To tackle these challenges, we serialize point clouds to impose structure on them and introduce AdaConv, which applies 2D convolutions directly; this simplifies the pipeline and better preserves relative positional relationships. Additionally, we propose a novel dynamic refinement approach for point positions, continuously adjusting the coordinates of points within each convolutional neighborhood to enhance flexibility and adaptability. We also integrate local and global features to compensate for the loss of point cloud features during downsampling. Finally, we build DSConv on PointNeXt, maintaining its scalability and inference speed. By combining DSConv with new architectural designs, we outperform current state-of-the-art methods on the ScanObjectNN, ScanNet v2, and S3DIS datasets.
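To make the serialize-then-convolve idea concrete, below is a minimal, self-contained PyTorch sketch of the general pattern the abstract describes: order points along a space-filling curve, fold the resulting sequence into a pseudo-2D grid so a standard Conv2d applies, and let a small MLP predict per-point coordinate offsets as a stand-in for the dynamic position refinement. This is not the authors' DSConv/AdaConv implementation; the names `SerializedConv`, `morton_serialize`, and `grid_w`, and all design details (Morton ordering, the offset MLP, the fold shape), are illustrative assumptions.

```python
# Hypothetical sketch of "convolution on serialized points" -- not the paper's code.
import torch
import torch.nn as nn


def morton_serialize(xyz: torch.Tensor, bits: int = 10) -> torch.Tensor:
    """Return a permutation that orders points by Morton (Z-order) code.

    xyz: (N, 3) coordinates, assumed normalized to [0, 1].
    """
    q = (xyz.clamp(0, 1) * (2**bits - 1)).long()  # quantize to an integer grid
    code = torch.zeros(xyz.shape[0], dtype=torch.long, device=xyz.device)
    for b in range(bits):                          # interleave bits of x, y, z
        for d in range(3):
            code |= ((q[:, d] >> b) & 1) << (3 * b + d)
    return torch.argsort(code)


class SerializedConv(nn.Module):
    """Toy block: serialize -> fold to a 2D grid -> Conv2d -> unfold."""

    def __init__(self, in_ch: int, out_ch: int, grid_w: int = 32):
        super().__init__()
        self.grid_w = grid_w
        # +3 channels: refined xyz is concatenated so the kernel sees positions.
        self.conv = nn.Conv2d(in_ch + 3, out_ch, kernel_size=3, padding=1)
        # Small MLP predicting per-point offsets ("dynamic refinement" stand-in).
        self.offset = nn.Sequential(nn.Linear(in_ch, 32), nn.ReLU(), nn.Linear(32, 3))

    def forward(self, xyz: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3), feat: (N, C); N assumed divisible by grid_w for brevity.
        order = morton_serialize(xyz)
        xyz, feat = xyz[order], feat[order]
        xyz = xyz + 0.01 * torch.tanh(self.offset(feat))   # refined coordinates
        x = torch.cat([feat, xyz], dim=1)                  # (N, C + 3)
        n, c = x.shape
        grid = x.t().reshape(1, c, n // self.grid_w, self.grid_w)  # fold to 2D
        out = self.conv(grid)                              # plain 2D convolution
        return out.reshape(out.shape[1], n).t()            # back to (N, out_ch)


if __name__ == "__main__":
    pts, feats = torch.rand(1024, 3), torch.rand(1024, 16)
    print(SerializedConv(16, 64)(pts, feats).shape)  # torch.Size([1024, 64])
```

Folding a 1D serialized sequence into a 2D grid is one plausible way to reuse highly optimized Conv2d kernels, which is consistent with the throughput emphasis in the TL;DR; the actual neighborhood construction and refinement rule in DSConv may differ.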
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1069