Semi-supervised classification by reaching consensus among modalities


Oct 21, 2018 (edited Sep 10, 2019), NIPS 2018 Workshop IRASL Blind Submission
  • Abstract: Deep learning has demonstrated the ability to learn complex structures, but it can be restricted by the available data. Recently, Consensus Networks (CNs) were proposed to alleviate data sparsity by utilizing features from multiple modalities, but they too have been limited by the size of the labeled data. In this paper, we extend CNs to Transductive Consensus Networks (TCNs), which are suitable for semi-supervised learning. In TCNs, different modalities of input are compressed into latent representations, which we encourage to become indistinguishable during iterative adversarial training. To understand TCNs' two mechanisms, consensus and classification, we put forward three variants in ablation studies on these mechanisms. To further investigate TCN models, we treat the latent representations as probability distributions and measure their similarities as negative relative Jensen-Shannon divergences. We show that a consensus state beneficial for classification requires a stable but imperfect similarity between the representations. Overall, TCNs outperform or match the best benchmark algorithms given 20 to 200 labeled samples on the Bank Marketing and DementiaBank datasets.
  • TL;DR: TCN for multimodal semi-supervised learning + ablation study of its mechanisms + interpretations of latent representations
  • Keywords: Transductive Consensus Networks, Interpretation of consensus
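The similarity measure mentioned in the abstract, negative Jensen-Shannon divergence between latent representations treated as probability distributions, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the example distributions are assumptions:

```python
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions; assumes q > 0 wherever p > 0.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def neg_js_divergence(p, q):
    # Negative Jensen-Shannon divergence: values lie in [-ln 2, 0],
    # and higher (closer to 0) means the two distributions are more similar.
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return -0.5 * (kl_divergence(p, m) + kl_divergence(q, m))

# Two softmax-normalized latent representations (hypothetical values).
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(neg_js_divergence(p, p))  # identical distributions give 0.0
print(neg_js_divergence(p, q))  # a small negative value
```

Under this reading, a "stable but imperfect" consensus corresponds to the measure settling at a value strictly below zero rather than converging to zero.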