Abstract: Multi-view outlier detection has attracted rapidly growing attention from researchers due to its wide range of applications. However, most existing methods fail to detect outliers that appear in more than two views, rely solely on clustering techniques to detect outliers in the multi-view scenario, and do not fully exploit the relationships among different views. To address these issues, we propose ECMOD, which learns enhanced representations via contrasting for multi-view outlier detection. Technically, ECMOD leverages two channels, a reconstruction channel and a constraint view channel, to learn the multi-view data. By fully considering the relationships among views, the two channels enable ECMOD to better capture the information associated with outliers in a latent space. ECMOD then applies a contrastive technique between the two groups of embeddings learned via the two channels, serving as an auxiliary task that enhances the multi-view representations. Furthermore, we exploit neighborhood consistency to unify the neighborhood structures across views, which allows ECMOD to detect outliers in two or more views. We also develop an outlier score function that handles different outlier types without clustering assumptions. Extensive experiments on real-world datasets show that ECMOD significantly outperforms most baselines.
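The abstract's auxiliary contrastive task between the two groups of channel embeddings could, in spirit, resemble the following minimal sketch. This is an assumption for illustration only: the paper's actual loss is not given here, so an InfoNCE-style objective is used, where matching rows of the two embedding matrices are treated as positive pairs and all other cross pairs as negatives.

```python
import numpy as np

def contrastive_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style loss between two groups of embeddings (illustrative only).

    z_a, z_b: (n, d) arrays of embeddings for the same n samples, e.g. from
    the reconstruction channel and the constraint view channel. Row i of z_a
    and row i of z_b form a positive pair; all other cross pairs are negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (n, n) cross-similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positive pairs lie on the diagonal
    return -np.mean(np.diag(log_prob))
```

Minimizing such a loss pulls the two channels' embeddings of the same sample together while pushing apart embeddings of different samples, which is one common way an auxiliary contrastive task can sharpen representations.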