Deep Multi-View Learning via Task-Optimal CCA

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission
Keywords: multi-view, components analysis, CCA, representation learning, deep learning
TL;DR: Learn a projection to a shared latent space that is also discriminative, improving cross-view classification, regularization with a second view during training, and multi-view prediction.
Abstract: Canonical Correlation Analysis (CCA) is widely used for multimodal data analysis and, more recently, for discriminative tasks such as multi-view learning; however, it makes no use of class labels. Recent CCA methods have started to address this weakness, but they are limited in that they either do not optimize the CCA projection and a discriminative objective simultaneously, or they are restricted to linear projections. We address these deficiencies by simultaneously optimizing a CCA-based objective and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is both highly correlated across views and discriminative. Our method shows a significant improvement over the previous state of the art (including deep supervised approaches) for cross-view classification (8.5% increase), regularization with a second view during training when only one view is available at test time (2.2-3.2%), and semi-supervised learning (15%) on real data.
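The core idea of the abstract, jointly minimizing a correlation-based multi-view loss and a supervised task loss over shared non-linear encoders, can be sketched in a few lines. The snippet below is a minimal illustration under assumptions, not the paper's exact objective: `TwoViewEncoder`, `correlation_loss`, the network sizes, and the 0.5 loss weight are all hypothetical, and the correlation term is a soft per-dimension surrogate that omits the within-view decorrelation a full CCA objective would enforce.

```python
import torch
import torch.nn as nn

class TwoViewEncoder(nn.Module):
    # Hypothetical encoder pair: one small MLP per view mapping into a shared
    # latent space, plus a linear classifier head on the latent codes.
    def __init__(self, dim_x, dim_y, latent_dim, num_classes):
        super().__init__()
        self.f_x = nn.Sequential(nn.Linear(dim_x, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.f_y = nn.Sequential(nn.Linear(dim_y, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x, y):
        return self.f_x(x), self.f_y(y)

def correlation_loss(z_x, z_y, eps=1e-4):
    # Soft CCA-style surrogate: standardize each latent dimension per batch,
    # then reward agreement between the two views (negated for minimization).
    # A faithful CCA objective would also decorrelate dimensions within each view.
    z_x = (z_x - z_x.mean(0)) / (z_x.std(0) + eps)
    z_y = (z_y - z_y.mean(0)) / (z_y.std(0) + eps)
    return -(z_x * z_y).mean()

# Joint end-to-end objective: task loss on view-x latents + weighted correlation term.
model = TwoViewEncoder(dim_x=784, dim_y=256, latent_dim=32, num_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(64, 784)                 # toy batch, view 1
y = torch.randn(64, 256)                 # toy batch, view 2
labels = torch.randint(0, 10, (64,))     # toy class labels

z_x, z_y = model(x, y)
loss = ce(model.classifier(z_x), labels) + 0.5 * correlation_loss(z_x, z_y)
loss.backward()
opt.step()
```

Because both losses backpropagate through the same encoders, the shared latent space is pushed to be correlated across views and discriminative at once, which is what enables training with a second view that is absent at test time.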
Code: https://drive.google.com/file/d/1Y53uvxmwNdaZIATGmyRkkLyhLf2TDmVl/view?usp=sharing
Community Implementations: https://www.catalyzex.com/paper/arxiv:1907.07739/code