Generalized structure-aware missing view completion network for incomplete multi-view clustering

22 Sept 2022, 12:31 (modified: 13 Feb 2023, 23:31) · ICLR 2023 Conference Withdrawn Submission
Keywords: Incomplete multi-view clustering, Missing view imputation, Representation learning, Deep neural network
TL;DR: A general incomplete multi-view clustering framework via missing view completion and recurrent graph constraint.
Abstract: Incomplete multi-view clustering has long been regarded as a challenging problem, since missing views inevitably destroy part of the effective information in the multi-view data itself. Most existing methods simply bypass the invalid views according to prior missing-pattern information, which is at best an evasive, second-best scheme; other methods that attempt to recover the missing information are mostly restricted to specific two-view datasets. To address these problems, we design a general structure-aware missing view completion network (SMVC) for incomplete multi-view clustering. Concretely, we build a two-stage autoencoder network with a self-attention structure that simultaneously extracts high-level semantic representations of multiple views and recovers the missing data. In addition, we develop a recurrent graph reconstruction mechanism that leverages the restored views to promote representation learning and further data reconstruction. Extensive experimental results confirm that SMVC has clear advantages over other state-of-the-art methods.
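The abstract's core idea, i.e. encoding the observed views into a shared latent space and decoding that latent to impute the missing views, can be illustrated with a toy sketch. Note this is a hypothetical, minimal linear stand-in for illustration only: the actual SMVC uses deep two-stage autoencoders with self-attention and a recurrent graph constraint, none of which appear below, and all names (`X1`, `X2`, `mask`, `W1`, `W2`, `D2`) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: 6 samples, view dims 4 and 3.
# mask[i, v] = 1 iff view v is observed for sample i (hypothetical setup).
X1 = rng.normal(size=(6, 4))
X2 = rng.normal(size=(6, 3))
mask = np.ones((6, 2), dtype=int)
mask[4:, 1] = 0  # the last two samples are missing view 2

# Minimal linear "encoders" into a shared 2-d latent space
# (random projections stand in for trained networks).
W1 = rng.normal(size=(4, 2))
W2 = rng.normal(size=(3, 2))
Z1, Z2 = X1 @ W1, X2 @ W2

# Fuse latent codes over the observed views only.
Z = (Z1 * mask[:, :1] + Z2 * mask[:, 1:]) / mask.sum(axis=1, keepdims=True)

# "Decode" the fused latent back to view 2 by least-squares fit on the
# observed samples, then impute the rows where view 2 is missing.
obs = mask[:, 1] == 1
D2, *_ = np.linalg.lstsq(Z[obs], X2[obs], rcond=None)
X2_completed = np.where(mask[:, 1:] == 1, X2, Z @ D2)

print(X2_completed.shape)  # (6, 3): observed rows kept, missing rows imputed
```

The key design point mirrored here is that imputation is driven by a representation shared across views, so the recovered view 2 is consistent with the information carried by view 1, rather than being filled with zeros or means.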
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning