Through the Lens of Neural Network: Analyzing Neural QA Models via Quantized Latent Representation

25 Sept 2019 (modified: 05 May 2023), ICLR 2020 Conference Withdrawn Submission
Keywords: Question Answering, Discrete Representation, Vector Quantization
Abstract: Deep learning models remain black boxes whose decision-making processes are opaque to humans. In this work, we explore the possibility of understanding how a machine "thinks" when performing question-answering tasks. In neural QA models, words are generally represented by continuous latent representations. Here we instead train QA models with discrete latent representations, so that each word in the context is itself mapped to a discrete token in the model. In this way, we can see what a word sequence in the context looks like through the lens of the QA model. We analyze QA models trained on QuAC (Question Answering in Context) and CoQA (A Conversational Question Answering Challenge) and identify several rules the models follow when handling this kind of QA task. We also find that the models retain much of their original performance after some hidden layers are quantized.
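
The abstract does not specify the quantization mechanism, so as a rough illustration, here is a minimal sketch of one common way to quantize a hidden layer into discrete tokens: a VQ-VAE-style nearest-codebook lookup with a straight-through gradient. The module name, codebook size, and dimensions are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Map continuous hidden states to the nearest codebook vector.

    Hypothetical sketch (VQ-VAE-style); the paper's exact scheme may differ.
    """
    def __init__(self, num_codes: int = 512, dim: int = 768, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight

    def forward(self, h):                       # h: (batch, seq_len, dim)
        flat = h.reshape(-1, h.size(-1))        # (batch * seq_len, dim)
        # Squared L2 distance from each position to every codebook entry.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        codes = dist.argmin(dim=1)              # one discrete token id per position
        q = self.codebook(codes).view_as(h)     # quantized hidden states
        # Codebook loss pulls codes toward encoder outputs; commitment loss
        # keeps encoder outputs near their assigned codes (VQ-VAE objective).
        loss = F.mse_loss(q, h.detach()) + self.beta * F.mse_loss(h, q.detach())
        # Straight-through estimator: gradients flow through as if q == h.
        q = h + (q - h).detach()
        return q, codes.view(h.shape[:-1]), loss

# Usage: quantize a hidden layer of a QA encoder, then inspect which
# discrete token each context word is assigned to.
vq = VectorQuantizer(num_codes=512, dim=768)
h = torch.randn(2, 10, 768)                     # e.g. a BERT-style hidden layer
q, token_ids, vq_loss = vq(h)                   # token_ids: (2, 10) code indices
```

Reading off `token_ids` per context position is what makes the analysis in the abstract possible: each word becomes an inspectable discrete symbol rather than an opaque continuous vector.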