Privacy, Interpretability, and Fairness in the Multilingual Space

Anonymous

16 Jan 2022 (modified: 05 May 2023), ACL ARR 2022 January Blind Submission
Abstract: Multilingual generalization, or compression, is an objective for cross-lingual models in natural language processing (NLP). We explore how the compression sought in such models aligns with other common objectives in NLP, such as performance, differential privacy, interpretability, and fairness. We show that compression, which can be quantified by, e.g., sentence retrieval or centered kernel alignment, is compatible with performance and privacy, but that performance and privacy are at odds, leading to non-linear interactions between compression, performance, and privacy. We also demonstrate that privacy is at odds with interpretability, leading to non-linear interactions between compression, privacy, and interpretability. Finally, while fairness and privacy are generally at odds, we show that in the multilingual space, fairness and privacy have common solutions. In sum, our study shows that if we want to learn multilingual models that exhibit good performance and good generalization properties, {\em and} are private, interpretable, and fair (or any combination thereof), we need to jointly optimize for these interdependent objectives.
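The abstract mentions centered kernel alignment (CKA) as one way to quantify multilingual compression. The paper's exact measurement setup is not given here, so the following is only a minimal sketch of the standard linear CKA similarity (Kornblith et al., 2019) applied to representations of parallel sentences; the function name, variable names, and toy data are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two representation
    matrices X (n x d1) and Y (n x d2), whose rows correspond to the
    same n examples (e.g., parallel sentences in two languages)."""
    # Center each feature dimension so the Frobenius norms below
    # measure similarity of centered Gram matrices.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return float(numerator / denominator)

# Hypothetical usage: encoder states for 128 parallel sentence pairs.
# Values near 1 indicate highly aligned (i.e., more "compressed")
# cross-lingual representations; values near 0 indicate little alignment.
rng = np.random.default_rng(0)
english_states = rng.normal(size=(128, 768))
german_states = english_states @ rng.normal(size=(768, 768))  # correlated toy data
print(linear_cka(english_states, german_states))
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of either representation, which is why it is a common choice for comparing representations across languages or layers.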
Paper Type: long