On the Effectiveness of Minimal Context Selection for Robust Question Answering

Anonymous

30 Oct 2018 (modified: 05 May 2023) · NIPS 2018 Workshop IRASL Blind Submission · Readers: Everyone
Abstract: Machine learning models for question answering (QA), in which, given a question and a passage, the learner must select some span in the passage as the answer, are known to be brittle. By inserting a single nuisance sentence into the passage, an adversary can fool the model into selecting the wrong span. A promising new approach to QA decomposes the task into two stages: (i) select relevant sentences from the passage; and (ii) select a span from among those sentences. Intuitively, if the sentence selector excludes the offending sentence, then the downstream span selector will be robust. While recent work has hinted at the potential robustness of two-stage QA, these methods have never, to our knowledge, been explicitly combined with adversarial training. This paper offers a thorough empirical investigation of adversarial robustness, demonstrating that although the two-stage approach lags behind single-stage span selection, adversarial training improves its performance significantly, yielding an improvement of over 22 F1 points over the adversarially trained single-stage model.
Keywords: Machine Reading Comprehension, Question Answering, Robustness, Adversarial Training
TL;DR: A two-stage approach consisting of sentence selection followed by span selection can be made more robust to adversarial attacks than a single-stage model trained on the full context.
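
To make the two-stage decomposition concrete, here is a minimal sketch of the inference pipeline described in the abstract. The function names (`select_sentences`, `select_span`, `two_stage_qa`) and the scoring logic are illustrative placeholders, not the paper's actual models: stage 1 is approximated by a simple lexical-overlap scorer and stage 2 by a stub span selector, standing in for trained neural components.

```python
# Sketch of two-stage QA inference: (i) select relevant sentences,
# (ii) run span selection only over the selected sentences.
# Both stages below are illustrative stand-ins, not the paper's models.

from typing import List, Tuple


def score_sentence(question: str, sentence: str) -> float:
    """Stage 1 stand-in: lexical-overlap relevance score."""
    q_tokens = set(question.lower().split())
    s_tokens = set(sentence.lower().split())
    return len(q_tokens & s_tokens) / (len(q_tokens) + 1e-6)


def select_sentences(question: str, passage: List[str], k: int = 2) -> List[str]:
    """Keep the top-k sentences most relevant to the question.

    If an adversarially inserted distractor scores poorly here,
    it never reaches the downstream span selector.
    """
    ranked = sorted(passage, key=lambda s: score_sentence(question, s), reverse=True)
    return ranked[:k]


def select_span(question: str, context: str) -> Tuple[int, int]:
    """Stage 2 stand-in: return a (start, end) character span in `context`.

    A real system would use a trained span-selection model here.
    """
    end = context.find(".")
    return (0, end if end != -1 else len(context))


def two_stage_qa(question: str, passage: List[str], k: int = 2) -> str:
    context = " ".join(select_sentences(question, passage, k))
    start, end = select_span(question, context)
    return context[start:end]


if __name__ == "__main__":
    passage = [
        "The Eiffel Tower is located in Paris.",
        "It was completed in 1889.",
        "The Leaning Tower of Pisa is located in Pisa.",  # distractor-style sentence
    ]
    # With k=1, the distractor-style sentence is filtered out before
    # span selection ever sees it.
    print(two_stage_qa("Where is the Eiffel Tower located?", passage, k=1))
```

The sketch only illustrates the intuition stated in the abstract, namely that filtering the passage before span selection can keep a nuisance sentence away from the span selector; the paper's contribution is combining this pipeline with adversarial training, which is not shown here.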