Towards Zero-Shot Functional Compositionality of Language Models

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission · Readers: Everyone
Keywords: Composition, Language Model
Abstract: Large Pre-trained Language Models (PLMs) have become the most desirable starting point in the field of NLP, as they are remarkably good at solving many individual tasks. Despite this success, in this position paper we argue (with a touch of empirical results) that current paradigms for working with PLMs neglect a critical aspect of modeling human intelligence. $\textbf{Functional compositionality}$ -- the ability to compose learned tasks -- has been a long-standing challenge in the field of AI (and many other fields), as it is considered one of the hallmarks of human intelligence. An illustrative example is cross-lingual summarization, where a bilingual (English-French) person can directly summarize an English document into French sentences $\textit{without}$ having to explicitly translate the English document or its summary into French. We discuss why this is an important open problem that requires further attention from the field. Then, through various experiments on composite tasks, we show how far we currently are from attaining such human-level generalizability. Finally, we suggest several research directions that could push the field towards $\textit{zero-shot}$ functional compositionality of language models.
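To make the cross-lingual summarization example concrete, the minimal sketch below (not from the paper; the model choices, prompt wording, and use of Hugging Face `transformers` pipelines are illustrative assumptions) contrasts an explicit summarize-then-translate pipeline with a single composed, zero-shot request to one model.

```python
# Minimal sketch contrasting a pipelined solution with a zero-shot composed one.
# NOTE: model names and prompt wording are illustrative assumptions, not the paper's setup.
from transformers import pipeline

document = "Large pre-trained language models have reshaped NLP research ..."  # English source text

# Pipelined approach: summarize in English, then translate the summary into French.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
english_summary = summarizer(document, max_length=60, min_length=10)[0]["summary_text"]
french_summary = translator(english_summary)[0]["translation_text"]

# Composed (zero-shot) approach: ask a single instruction-tuned model to apply both
# learned skills at once, i.e. summarize the English document directly in French.
generator = pipeline("text2text-generation", model="google/flan-t5-large")
prompt = f"Summarize the following English document in French:\n\n{document}"
composed_summary = generator(prompt, max_length=60)[0]["generated_text"]

print("Pipelined:", french_summary)
print("Composed :", composed_summary)
```

The abstract's claim is that the second, composed call is where current PLMs still fall short of the bilingual human in the example when no task-specific training on the composite task is allowed.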
Paper Type: long
Research Area: Generation