Pooling by Sliced-Wasserstein Embedding

Published: 09 Nov 2021, Last Modified: 05 May 2023
NeurIPS 2021 Poster
Keywords: Pooling, feature aggregation, permutation invariance, optimal transport, set learning, representation learning
Abstract: Learning representations from sets has become increasingly important with many applications in point cloud processing, graph learning, image/video recognition, and object detection. We introduce a geometrically interpretable and generic pooling mechanism for aggregating a set of features into a fixed-dimensional representation. In particular, we treat elements of a set as samples from a probability distribution and propose an end-to-end trainable Euclidean embedding for the sliced-Wasserstein distance to learn effectively from set-structured data. We evaluate the proposed pooling method on a wide variety of set-structured data, including point cloud, graph, and image classification tasks, and demonstrate that it outperforms existing set representation learning approaches. Our code is available at https://github.com/navid-naderi/PSWE.
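As background, the sliced-Wasserstein distance that the embedding builds on averages one-dimensional Wasserstein distances over projection directions; the closed form of the one-dimensional distance in terms of quantile functions is what makes a sorting-based Euclidean embedding possible. A standard statement of the definition (notation chosen here for illustration):

```latex
% Sliced-Wasserstein distance between measures \mu, \nu on R^d:
% average the 1-D Wasserstein distance over projection directions \theta.
SW_p^p(\mu, \nu) = \int_{\mathbb{S}^{d-1}} W_p^p\bigl(\theta_{\#}\mu,\, \theta_{\#}\nu\bigr)\, d\sigma(\theta),
% where \theta_{\#}\mu is the pushforward of \mu under x \mapsto \langle \theta, x \rangle,
% \sigma is the uniform measure on the unit sphere, and in one dimension
W_p^p(\theta_{\#}\mu, \theta_{\#}\nu) = \int_0^1 \bigl| F_{\theta_{\#}\mu}^{-1}(t) - F_{\theta_{\#}\nu}^{-1}(t) \bigr|^p\, dt,
% with F^{-1} the quantile function, computable by sorting for empirical measures.
```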
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: We present a novel pooling method, called Pooling by Sliced-Wasserstein Embedding (PSWE), which leverages the (generalized) sliced-Wasserstein distance to map an input set of arbitrary cardinality to a fixed-size representation.
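The authors' implementation is in the linked repository; the following is only a minimal sketch of the mechanism the TL;DR describes, written in PyTorch under assumptions of mine: the module name PSWEPool, the parameters num_slices and num_ref_points, and the quantile-based resampling are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PSWEPool(nn.Module):
    """Sketch of sliced-Wasserstein pooling: a set of d_in-dimensional
    features (any cardinality) -> one fixed-size vector."""

    def __init__(self, d_in: int, num_slices: int, num_ref_points: int):
        super().__init__()
        # Learned slicing directions, one column per 1-D projection.
        self.theta = nn.Parameter(torch.randn(d_in, num_slices))
        # Learned reference set: one scalar per (reference point, slice).
        self.reference = nn.Parameter(torch.randn(num_ref_points, num_slices))
        self.num_ref_points = num_ref_points

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_in), where the set size n may vary across calls.
        proj = x @ F.normalize(self.theta, dim=0)          # (batch, n, L)
        # Per-slice quantiles = sorted projections linearly interpolated
        # to num_ref_points values, making the output independent of n.
        q = torch.linspace(0.0, 1.0, self.num_ref_points, device=x.device)
        quantiles = torch.quantile(proj, q, dim=1)         # (M, batch, L)
        quantiles = quantiles.permute(1, 0, 2)             # (batch, M, L)
        # Monge-map-style embedding: displacement of each sorted slice
        # from the corresponding sorted reference slice.
        ref_sorted, _ = torch.sort(self.reference, dim=0)  # (M, L)
        return (quantiles - ref_sorted).flatten(1)         # (batch, M * L)
```

For example, PSWEPool(d_in=128, num_slices=16, num_ref_points=32) maps a batch of sets of any cardinality to 512-dimensional vectors; permutation invariance holds because the per-slice quantiles (equivalently, sorting) discard element order.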
Supplementary Material: pdf
Code: https://github.com/navid-naderi/PSWE