Abstract: Conventional machine learning algorithms have traditionally been designed under the assumption that input data are vectors, and the field has largely developed around vector-centric paradigms. However, as demand for tasks involving set-based inputs has grown, the research community has shifted towards addressing these challenges. In recent years, the emergence of neural network architectures such as Deep Sets and Transformers has marked a significant advance in the treatment of set-based data: these architectures are specifically engineered to accept sets as input, enabling more effective representation and processing of set structures. Consequently, there has been a surge of research dedicated to exploring and harnessing the capabilities of these architectures for tasks involving the approximation of set functions. This survey provides an overview of the diverse problem settings and ongoing research efforts pertaining to neural networks that approximate set functions. By delving into the details of these approaches and elucidating the associated challenges, it aims to equip readers with a comprehensive understanding of the field, including the potential applications, inherent limitations, and future directions of set-based neural networks. From this survey we draw two insights: i) Deep Sets and its variants can be generalized through differences in the aggregation function, and ii) the behavior of Deep Sets is sensitive to the choice of aggregation function. Based on these observations, we show that Deep Sets, one of the best-known permutation-invariant neural networks, can be generalized in the sense of a quasi-arithmetic mean.
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: First of all, we would like to thank all the reviewers for their helpful comments. Their suggestions greatly helped us to improve our manuscript.
We have addressed their comments as best we could, and the corresponding modifications are marked in red text.
We list below the major changes made to the manuscript:
# 04 Feb 2024
### Addition of more recent studies
As Reviewer Mt8i suggested, we have tried to cover more recent studies. For example, in Subsection 3.9 we summarize a study, together with its variants, that proposed a mechanism which has also been adopted in recent foundation models.
### Change of title
As reviewer wi7p pointed out, there was a discrepancy between the content of our first draft and its title. To address this, we have chosen a more appropriate title.
### Proposal of a new generalization of Deep Sets
As reviewers FuAm and wi7p suggested, we sought to strengthen the manuscript with additional novelty, such as generalizing existing concepts or proposing a new model architecture. We therefore show that Deep Sets and PointNet can be generalized via a quasi-arithmetic mean of parametric form, and propose a new generalized architecture based on this observation. Numerical experiments show that, with proper optimization of the generalization parameter, it outperforms existing architectures in a reasonable computation time. Please see Section 7 for more details.
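To illustrate the idea behind this generalization, the sketch below shows how a Hölder (power) mean with generator $\varphi(t) = t^p$ interpolates between the sum/mean pooling of Deep Sets ($p = 1$) and the max pooling of PointNet ($p \to +\infty$). This is an illustrative example under the assumption of non-negative features; the function name and interface are ours, not the manuscript's implementation.

```python
import numpy as np

def power_mean_pool(x, p):
    """Hölder (power) mean pooling over a set of feature vectors.

    Illustrative sketch (not the paper's code). Assumes non-negative
    features.

    x: array of shape (n, d), a set of n feature vectors.
    p: power parameter; p = 1 recovers mean pooling (Deep Sets-style
       sum pooling up to the factor 1/n), while p -> +inf approaches
       max pooling (PointNet-style).
    """
    if np.isposinf(p):
        return x.max(axis=0)
    return np.mean(x ** p, axis=0) ** (1.0 / p)
```

For intermediate values of $p$, the pooling smoothly interpolates between mean-like and max-like behavior, which is what tuning (or learning) the generalization parameter exploits.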
We hope that our corrections address the concerns of all reviewers.
# 05 Feb 2024
- Changed "permutation equivalence" and "permutation equivalent" $\to$ "permutation equivariance" and "permutation equivariant".
# 11 Mar 2024
- Removed "the" from the title.
- Added citation to Proposition 4.1.
- Added proof for i) to Proposition 4.2.
- Clarified the statement of Proposition 4.2 and added an additional assumption.
- Defined sum-decomposability in the part introducing Deep Sets.
- In Definition 2.2, we defined permutations on tuples.
- In Definition 2.5, we defined the function on tuples.
- Removed Definition 2.6.
- In Definition 2.7 (now 2.6), we defined permutation equivariance on tuples and fixed typos.
- Restated Hölder's Power Deep Sets as a previously unstudied special class of Deep Sets, rather than a novel generalization.
- Changed the statement "Deep Sets and PointNet correspond to special cases with $p = 1$ and $p = +\infty$, respectively." $\to$ "$p=1$ and $p=+\infty$ give Deep Sets and PointNet, respectively."
- In Subsection 7.1, we reworded items 1 and 2 as expectations rather than conjectures.
- Added Subsection: "Limitations of Hölder’s Power Deep Sets".
- Changed the title.
Assigned Action Editor: ~Seungjin_Choi1
Submission Number: 1793