When Does Syntax Mediate Neural Language Model Performance? Evidence from Dropout Probes

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: Recent causal probing work reveals when language models and syntactic probes rely on similar representations. Such techniques may yield "false negative" causality results: a model may use representations of syntax, yet the probe may have learned to use redundant encodings of the same syntactic information. We demonstrate that models do encode syntactic information redundantly and introduce a new probe design that guides probes to consider all syntactic information present in embeddings. Using these probes, we find evidence for the use of syntax in models where prior methods did not, and we boost model performance by injecting syntactic information into representations.
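The "dropout probe" idea in the abstract can be pictured as a standard syntactic probe whose input embedding dimensions are randomly masked during training, so the probe cannot lean on one sufficient subset of dimensions and is pushed to attend to redundant encodings of the same syntactic information. The sketch below is a minimal illustration of that idea, not the paper's exact architecture; the linear classifier head, embedding size, label count, and dropout rate are all assumptions.

import torch
import torch.nn as nn

class DropoutProbe(nn.Module):
    """Syntactic probe with dropout applied to the input embedding.

    Randomly zeroing embedding dimensions during training encourages the
    probe to use all (redundant) encodings of syntactic information
    rather than any single sufficient subset.
    """

    def __init__(self, embed_dim: int, num_labels: int, p: float = 0.5):
        super().__init__()
        self.dropout = nn.Dropout(p=p)                        # hypothetical dropout rate
        self.classifier = nn.Linear(embed_dim, num_labels)    # linear probe head (assumed)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, embed_dim) contextual vectors from the language model
        return self.classifier(self.dropout(embeddings))

# Train like an ordinary probe; dropout is active only in train() mode.
probe = DropoutProbe(embed_dim=768, num_labels=45, p=0.5)
probe.train()
logits = probe(torch.randn(8, 768))  # -> shape (8, 45)

At evaluation time, probe.eval() disables the masking, so reported probe accuracy reflects the full embedding.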
Paper Type: long