Improving Compositionality of Neural Networks by Decoding Representations to Inputs

21 May 2021 (edited 21 Feb 2022) · NeurIPS 2021 Poster
  • Keywords: generative models, decoder, representations, out-of-distribution, compositionality
  • TL;DR: Using generative models, we jointly optimize neural network activations to decode back to inputs, enabling a form of compositionality in neural networks that is useful for real-world applications.
  • Abstract: In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together. Although deep learning programs have demonstrated strong performance on novel applications, they sacrifice many of the functionalities of traditional software programs. With this as motivation, we take a modest first step towards improving deep learning programs by jointly training a generative model to constrain neural network activations to "decode" back to inputs. We call this design a Decodable Neural Network, or DecNN. Doing so enables a form of compositionality in neural networks, where one can recursively compose a DecNN with itself to create an ensemble-like model with uncertainty. In our experiments, we demonstrate applications of this uncertainty to out-of-distribution detection, adversarial example detection, and calibration, while matching standard neural networks in accuracy. We further explore this compositionality by combining DecNN with pretrained models, where we show promising results that neural networks can be regularized to avoid relying on protected features.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
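The abstract's core idea, training a network whose hidden activations can be decoded back to the input, and recursively composing the model with itself to obtain an uncertainty signal, can be illustrated with a minimal sketch. This is an illustrative NumPy toy, not the paper's implementation: the linear layers, the trade-off weight `lam`, and the disagreement-based uncertainty measure are all assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and weights (illustrative assumptions, not the paper's architecture)
d_in, d_hid, n_cls = 8, 4, 3
W_enc = rng.normal(size=(d_in, d_hid)) * 0.1   # encoder: input -> activations
W_cls = rng.normal(size=(d_hid, n_cls)) * 0.1  # task head: activations -> logits
W_dec = rng.normal(size=(d_hid, d_in)) * 0.1   # decoder: activations -> input space

def predict(x):
    """One forward pass: task logits plus a decoded reconstruction of the input."""
    h = np.tanh(x @ W_enc)          # hidden activations
    return h @ W_cls, h @ W_dec     # logits, decoded input

def joint_loss(x, y, lam=1.0):
    """Joint objective: task cross-entropy + reconstruction ('decodability') loss.
    `lam` is a hypothetical trade-off weight."""
    logits, x_hat = predict(x)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    task_loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    recon_loss = ((x_hat - x) ** 2).mean()
    return task_loss + lam * recon_loss, recon_loss

# Joint training objective on a toy batch
x = rng.normal(size=(5, d_in))
y = rng.integers(0, n_cls, size=5)
loss, recon = joint_loss(x, y)

# Recursive composition: decode the activations back to input space and
# re-feed the decoded input; disagreement between the two passes serves
# as a crude per-example uncertainty signal.
logits1, x_hat = predict(x)
logits2, _ = predict(x_hat)
uncertainty = np.abs(logits1 - logits2).mean(axis=1)
```

In a trained DecNN the reconstruction term would be minimized jointly with the task loss, so for in-distribution inputs the decoded input stays close to the original and the two passes agree; large disagreement then flags out-of-distribution or adversarial inputs, matching the applications listed in the abstract.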