The Card Shuffling Hypotheses: Building a Time and Memory Efficient Graph Convolutional Network

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Keywords: graph convolutional network, network compression, model acceleration, k-nearest neighbor, card shuffling, 3D deep learning
Abstract: This paper investigates the design of time and memory efficient graph convolutional networks (GCNs). State-of-the-art GCNs adopt $K$-nearest neighbor (KNN) searches for local feature aggregation and feature extraction operations from layer to layer. Based on a mathematical analysis of existing graph convolution operations, we articulate the following two card shuffling hypotheses. (1) Shuffling the nearest neighbor selection for KNN searches in a multi-layered GCN approximately preserves the local geometric structures of 3D representations. (2) Shuffling the order of local feature aggregation and feature extraction leads to equivalent or similar composite operations for GCNs. The two hypotheses shed light on two possible directions for accelerating modern GCNs: reasonable shuffling of the cards (neighbor selection or local feature operations) can significantly improve time and memory efficiency. A series of experiments shows that network architectures designed based on the proposed card shuffling hypotheses significantly decrease both time and memory consumption (e.g., by about 50% for point cloud classification and semantic segmentation) while maintaining comparable accuracy on several important tasks in 3D deep learning, i.e., 3D classification, part segmentation, semantic segmentation, and mesh generation.
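The second hypothesis has a simple exact special case that may help build intuition: when feature extraction is linear and aggregation is a mean (or sum) over neighbors, the two operations commute, and aggregating first is cheaper because the linear map is applied to one vector instead of $K$. A minimal sketch (all shapes and names here are illustrative, not taken from the paper):

```python
# Illustrative linear case of the order-shuffling hypothesis:
#   mean_j(W x_j) == W(mean_j x_j)  by linearity of W.
# Aggregating before extracting features reduces K matrix-vector
# products to a single one per point.
import numpy as np

rng = np.random.default_rng(0)
k, d_in, d_out = 16, 32, 64                   # neighbors, input dim, output dim
neighbors = rng.standard_normal((k, d_in))    # KNN features of one point
W = rng.standard_normal((d_in, d_out))        # shared linear feature extractor

# Order A: extract features for every neighbor, then aggregate.
out_a = (neighbors @ W).mean(axis=0)

# Order B: aggregate neighbors first, then extract features once.
out_b = neighbors.mean(axis=0) @ W

assert np.allclose(out_a, out_b)              # the two orders agree exactly
```

With nonlinear extractors (e.g., an MLP before a max-pool) the equality no longer holds exactly, which is why the hypothesis is stated as "equivalent or similar" composite operations.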
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: This paper investigates the design of time and memory efficient graph convolutional networks (GCNs).
Supplementary Material: zip archive
Reviewed Version (pdf): https://openreview.net/references/pdf?id=Cs4_fUBwc3