Teaching Smaller Language Models To Generalise To Unseen Compositional Questions

Published: 31 Aug 2023, Last Modified: 31 Aug 2023. Accepted by TMLR.
Abstract: We equip a smaller Language Model to generalise to answering challenging compositional questions that have not been seen in training. To do so we propose a combination of multitask supervised pretraining on up to 93 tasks designed to instill diverse reasoning abilities, and a dense retrieval system that aims to retrieve a set of evidential paragraph fragments. Recent progress in question answering has been achieved either by prompting very large pretrained Language Models in zero- or few-shot fashion, or by fine-tuning smaller models, sometimes in conjunction with information retrieval. We focus on the less explored question of the extent to which zero-shot generalisation can be enabled in smaller models with retrieval against a corpus within which sufficient information to answer a particular question may not exist. We establish strong baselines in this setting for diverse evaluation datasets (StrategyQA, CommonsenseQA, IIRC, DROP, Musique and ARC-DA), and show that performance can be significantly improved by adding retrieval-augmented training datasets designed to expose our models to a variety of heuristic reasoning strategies, such as weighing partial evidence or ignoring an irrelevant context.
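The abstract does not detail the retrieval component; as a rough sketch of dense retrieval over paragraph fragments, the Python below illustrates maximum-inner-product scoring with a shared question/passage encoder. The encoder name, toy corpus, and `retrieve` helper are illustrative assumptions, not the authors' configuration (see the linked repository for their actual implementation).

```python
# Minimal dense-retrieval sketch (illustrative only; not the authors' system).
# Assumes the sentence-transformers package is installed; the model name and
# corpus below are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

corpus = [
    "Paragraph fragment about topic A ...",
    "Paragraph fragment about topic B ...",
]

# Pre-encode the corpus once; each question is encoded at query time.
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k fragments with the highest inner-product score."""
    q_emb = encoder.encode([question], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]
```

In practice the corpus embeddings would be precomputed and indexed (e.g. with an approximate nearest-neighbour library such as FAISS) rather than scored by brute force as above, and the retrieved fragments would then be concatenated into the smaller model's input context.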
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Code repository de-anonymised, acknowledgements added, and manuscript updated to the camera-ready version following acceptance.
Code: https://github.com/timhartill/unseen_questions
Assigned Action Editor: ~Karthik_R_Narasimhan1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 999