Abstract: While several benchmarks exist for reasoning tasks, reasoning across domains is an under-explored area in NLP. To address this gap, we present a dataset and a prompt-template-filling approach that enables sequence-to-sequence models to perform cross-domain reasoning. We also present a case study spanning the commonsense and health and well-being domains, in which we examine how prompt-template filling enables pretrained sequence-to-sequence models to reason across domains. Our experiments with several pretrained encoder-decoder models show that cross-domain reasoning remains challenging for current models. We conclude with an in-depth error analysis and avenues for future research on reasoning across domains.
Paper Type: long
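For concreteness, below is a minimal sketch of what prompt-template filling with a pretrained encoder-decoder might look like, assuming a Hugging Face T5 model. The template wording, the example premise/question pair, and the choice of `t5-base` are illustrative assumptions; the abstract does not specify the paper's actual templates or models.

```python
# Minimal sketch: filling a cross-domain prompt template and letting a
# pretrained encoder-decoder generate the answer span. All specifics
# (model, template, example) are hypothetical, not the paper's setup.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical template pairing a commonsense premise with a
# health/well-being question; <extra_id_0> marks the slot to fill.
prompt = (
    "premise: Regular exercise improves mood. "
    "question: Why might a daily walk help someone who feels stressed? "
    "answer: <extra_id_0>"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```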