Abstract: Effectively exploiting shared and view-specific structural information improves the performance of multi-view clustering, as existing methods based on original data or shallow embeddings have shown. However, these methods fail to capture the latent shared and specific structure information in the deep embeddings of multi-view data. This paper proposes a novel Deep Shared and Specific Fusion induced multi-view subspace clustering method (DSSF) to address this problem. DSSF introduces a new fusion layer consisting of two co-learned parts: a view-shared subspace that learns the latent shared structure information, and a view-specific subspace that captures the latent specific structure information of each view. Meanwhile, our method jointly applies consistency and diversity regularization terms to enhance multi-view clustering performance: the former preserves the structure information of the deep embeddings in the view-shared subspace, while the latter maximizes the difference between the view-specific subspaces of the views to enhance complementarity. Experimental results on several real-world datasets demonstrate the effectiveness of our method, and an ablation study shows that the joint regularization term significantly improves the clustering performance.
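To make the described fusion concrete, the following is a minimal NumPy sketch of one plausible form of such an objective: each view's deep embedding is self-expressed through the sum of a view-shared coefficient matrix and a view-specific one, a consistency term keeps the shared subspace faithful to every view, and a diversity term penalizes overlap between the view-specific matrices. The function name `dssf_loss`, the exact penalty forms, and the weights `lam_c`/`lam_d` are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def dssf_loss(Z_list, C_shared, C_spec, lam_c=1.0, lam_d=1.0):
    """Illustrative shared+specific self-expression loss (assumed form,
    not the paper's exact objective).

    Z_list   : list of (d_v, n) deep embeddings, one per view
    C_shared : (n, n) view-shared coefficient matrix
    C_spec   : list of (n, n) view-specific coefficient matrices
    """
    # self-expression: each view reconstructed by shared + specific codes
    recon = sum(
        np.linalg.norm(Z - Z @ (C_shared + C_v), "fro") ** 2
        for Z, C_v in zip(Z_list, C_spec)
    )
    # consistency: the shared subspace alone should preserve each view's structure
    consistency = sum(
        np.linalg.norm(Z - Z @ C_shared, "fro") ** 2 for Z in Z_list
    )
    # diversity: discourage overlap between specific subspaces of different views
    diversity = 0.0
    for i in range(len(C_spec)):
        for j in range(i + 1, len(C_spec)):
            diversity += np.linalg.norm(C_spec[i].T @ C_spec[j], "fro") ** 2
    return recon + lam_c * consistency + lam_d * diversity
```

In a full pipeline these coefficient matrices would be optimized jointly with the deep encoders, and the learned affinity (e.g., built from `C_shared`) would be passed to spectral clustering.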