Keywords: Neural Networks, Shape Analysis
TL;DR: A novel network architecture for point clouds that enables learning over multiple scales by operating on and communicating between all levels of a multigrid hierarchy.
Abstract: We introduce Geometric Multigrid Neural Networks (GMNN), a novel network structure for geometric deep learning on point clouds and surfaces. Convolutional neural networks face a common challenge: how can relevant features be communicated over longer distances? Our architecture facilitates long-distance communication with Geometric Multigrid Convolution (GMC) blocks, which apply convolutions in parallel to features defined on each scale of a multigrid and enable communication all the way up and down the hierarchy. We observe two major structural advantages of such a network. First, because each GMC operates on every scale, even early stages can make use of coarse information, and the receptive field grows rapidly with depth. Second, networks built with this backbone have the freedom to route information between different scales, including in ways not possible for other architectures. Because of these advantages, we find that a GMNN can combine the fast training of a shallow network with the greater expressiveness of a deeper, larger network. We build a GMNN from the components of a state-of-the-art U-Net, and find that on real tasks it can match or exceed the accuracy of the base network while using fewer epochs and roughly half the parameter count.
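To make the core idea concrete, here is a toy numpy sketch of a single multigrid convolution step on a 1-D grid hierarchy. This is only an illustration of the structural pattern the abstract describes (per-scale convolutions plus communication up and down the hierarchy); the paper's actual GMC blocks operate on point clouds and surfaces, and all names here (`conv1d`, `restrict`, `prolong`, `gmc_block`) are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def conv1d(x, w):
    # "Same" 1-D convolution of a feature signal x with kernel w (edge-padded).
    pad = len(w) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def restrict(x):
    # Fine -> coarse transfer: average adjacent pairs of samples.
    return x.reshape(-1, 2).mean(axis=1)

def prolong(x):
    # Coarse -> fine transfer: nearest-neighbour duplication.
    return np.repeat(x, 2)

def gmc_block(levels, kernels):
    """One hypothetical GMC-style step: convolve every level in parallel,
    then exchange features with the neighbouring finer/coarser levels."""
    convolved = [conv1d(x, w) for x, w in zip(levels, kernels)]
    out = []
    for i, x in enumerate(convolved):
        y = x.copy()
        if i > 0:                       # receive from the finer level above
            y = y + restrict(convolved[i - 1])
        if i + 1 < len(convolved):      # receive from the coarser level below
            y = y + prolong(convolved[i + 1])
        out.append(y)
    return out

# Build a three-level hierarchy from a length-8 signal and apply one block.
x = np.arange(8.0)
levels = [x, restrict(x), restrict(restrict(x))]
kernels = [np.array([0.0, 1.0, 0.0])] * 3  # identity kernels for clarity
out = gmc_block(levels, kernels)
```

Because every level is convolved and then mixed with its neighbours in a single block, coarse information reaches the fine level after one step; in a U-Net, by contrast, features must first descend the whole encoder before coarse context can influence fine-scale outputs.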
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 17890