Neural Message Passing for Multi-Label Classification

27 Sep 2018 (modified: 01 Feb 2019) · ICLR 2019 Conference Blind Submission
  • Abstract: Multi-label classification (MLC) is the task of assigning a set of target labels to a given sample. Modeling the combinatorial label interactions in MLC has been a long-standing challenge. Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance on MLC. However, the sequential nature of modeling label dependencies through an RNN limits its ability to perform parallel computation, predict dense labels, and provide interpretable results. In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC. MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms, dispensing with autoregressive inference entirely. The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (they can be used on data with known or unknown structure). Experiments on seven real-world MLC datasets show that the proposed models outperform autoregressive RNN models across five different metrics, with a significant speedup at training and test time.
  • Keywords: Multi-label Classification, Graph Neural Networks, Attention, Graph Attention
  • TL;DR: We propose Message Passing Encoder-Decoder networks as a fast and accurate way of modeling label dependencies for multi-label classification.
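To make the abstract's core idea concrete, below is a minimal sketch of one attention-based message-passing step over a set of label nodes, replacing the sequential RNN update with a fully parallel, non-autoregressive aggregation. All names, shapes, and the LeakyReLU/tanh choices are illustrative assumptions, not the paper's actual MPED implementation.

```python
import numpy as np

def attention_message_pass(nodes, W, a):
    """One graph-attention message-passing step (illustrative sketch).

    nodes: (L, d) array of label-node embeddings
    W:     (d, d) shared projection matrix (assumed, learned in practice)
    a:     (2d,)  attention vector scoring concatenated node pairs (assumed)
    Returns updated (L, d) node states; all labels update in parallel,
    with no autoregressive ordering over labels.
    """
    h = nodes @ W                      # project every label state at once
    L = h.shape[0]
    # pairwise attention logits e[i, j] = LeakyReLU(a^T [h_i || h_j])
    e = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            z = np.concatenate([h[i], h[j]]) @ a
            e[i, j] = z if z > 0 else 0.2 * z
    # softmax over each node's neighbors (here: all labels, structure-agnostic)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ h)          # aggregate messages, nonlinear update
```

The attention weights `alpha` are what would give the interpretability the abstract mentions: each row shows how much every other label contributed to a given label's updated state.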