Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs

27 Sept 2018 (modified: 10 Feb 2022) · ICLR 2019 Conference Blind Submission
Keywords: representation learning, permutation invariance, set functions, feature pooling
TL;DR: We propose Janossy pooling, a method for learning deep permutation-invariant functions that exploits relationships within the input sequence, together with tractable inference strategies such as a stochastic optimization procedure we call π-SGD.
Abstract: We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.
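The core construction in the abstract, averaging a permutation-sensitive function over reorderings of the input, can be sketched in a few lines. This is a minimal illustration, not the paper's neural implementation: `f` below is a toy position-weighted sum standing in for a permutation-sensitive network such as an LSTM, and the `num_samples` path is a hypothetical Monte Carlo approximation in the spirit of the random-permutation strategy the abstract mentions.

```python
import itertools
import random

def janossy_pool(f, x, num_samples=None):
    """Janossy pooling: average a permutation-sensitive function f
    over reorderings of the input sequence x.

    Exact mode (num_samples=None) enumerates all |x|! permutations,
    which is only tractable for short sequences. Passing num_samples
    instead averages f over randomly sampled permutations, a Monte
    Carlo approximation of the exact pooling.
    """
    if num_samples is None:
        perms = list(itertools.permutations(x))
    else:
        x = list(x)
        perms = [random.sample(x, len(x)) for _ in range(num_samples)]
    return sum(f(p) for p in perms) / len(perms)

# A toy permutation-sensitive function: a position-weighted sum.
def f(seq):
    return sum((i + 1) * v for i, v in enumerate(seq))

# Exact Janossy pooling is permutation-invariant by construction:
# any two orderings of the same multiset give the same output.
print(janossy_pool(f, [1, 2, 3]) == janossy_pool(f, [3, 1, 2]))  # True
```

The exact average makes invariance trivial to verify; the sampled variant trades that exactness for tractability on longer sequences, which is the tension the paper's canonical-ordering, k-order, and stochastic-optimization approximations address.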
Code: [PurdueMINDS/JanossyPooling](https://github.com/PurdueMINDS/JanossyPooling) · [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=BJluy2RcFm)
Data: [PPI](https://paperswithcode.com/dataset/ppi), [Pubmed](https://paperswithcode.com/dataset/pubmed)
