Abstract: Anchor-based methods are popular in large-scale multi-view clustering for their low computational complexity. Existing solutions either construct anchor graphs on each view and fuse them, or directly learn a consensus structure across views. However, these strategies do not fully exploit the complementary information between views. To address this, we propose a Dual-Space Co-training model for large-scale Multi-view Clustering (DSCMC), which learns a consistent anchor graph through co-training in two spaces. Specifically, we introduce an orthogonal projection matrix in the original space, enabling the learned consistent anchor graph to capture the inherent relationships within each view. Meanwhile, a feature transformation matrix maps samples to a shared latent space, facilitating cross-view information alignment and a more comprehensive characterization of the data distribution. The proposed joint optimization strategy allows us to construct a discriminative anchor graph that effectively captures the essential features of multi-view data. Extensive experiments demonstrate that our method reduces computational complexity while outperforming existing approaches in clustering performance.
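To make the complexity claim concrete, the sketch below shows the standard way an anchor graph is turned into cluster labels without ever forming the full n x n affinity matrix: spectral embedding is obtained from the SVD of the (n x m) anchor graph, costing O(nm^2) with m << n. This is a generic anchor-graph clustering step, not the paper's exact procedure; the function name, the normalization, and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_from_anchor_graph(Z, k, random_state=0):
    """Cluster n samples from an anchor graph Z of shape (n, m), m << n.

    The implied affinity S = Z_hat @ Z_hat.T is never materialized; its
    spectral embedding comes from the thin SVD of Z_hat, so the cost is
    O(n * m^2) rather than O(n^2) or worse.
    """
    d = Z.sum(axis=0)                       # anchor degrees, shape (m,)
    Z_hat = Z / np.sqrt(d + 1e-12)          # column-scaled anchor graph
    U, _, _ = np.linalg.svd(Z_hat, full_matrices=False)
    embedding = U[:, :k]                    # top-k left singular vectors
    return KMeans(n_clusters=k, n_init=10,
                  random_state=random_state).fit_predict(embedding)

# Toy usage: n = 5000 samples, m = 64 anchors, k = 10 clusters (random graph).
Z = np.abs(np.random.rand(5000, 64))
Z = Z / Z.sum(axis=1, keepdims=True)        # rows sum to 1, as anchor graphs typically do
labels = cluster_from_anchor_graph(Z, k=10)
```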