Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Generating commonsense assertions given a story context is a difficult challenge even for modern language models. One reason may be that the model has to "guess" which topic or entity in the story to generate an assertion about. Prior work has tackled part of the problem by providing techniques to align commonsense inferences with stories and by training language generation models on the aligned data. However, no prior work provides a means of controlling the content of a generated assertion. In this work, we present "hinting", a data augmentation technique for improving inference of contextualized commonsense assertions. Hinting is a prefix prompting strategy that uses both hard and soft prompts. We demonstrate the effectiveness of hinting on two contextual commonsense inference datasets, ParaCOMET (Gabriel et al., 2021) and GLUCOSE (Mostafazadeh et al., 2020), for both general and context-specific inference.
Paper Type: long
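The abstract describes hinting as a prefix prompting strategy combining hard prompts (natural-language hint tokens) with soft prompts (trainable continuous embeddings). The sketch below illustrates how such a combined prefix could be assembled for a GPT-2-style decoder; it is a minimal illustration under assumed details, since the hint format, the number of soft tokens, and the choice of model are hypothetical and not taken from the paper.

```python
# Minimal sketch: prepend a hard hint (text) and a soft prompt
# (trainable embeddings) to a story, assuming a GPT-2-style decoder.
# The hint string and soft-prompt length are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

story = "Maria packed an umbrella before leaving the house."
hint = "Hint: umbrella."  # hypothetical hard-prompt format
hard_ids = tokenizer(hint + " " + story, return_tensors="pt").input_ids

# Soft prompt: n_soft trainable vectors prepended at the embedding level.
n_soft = 5
embed = model.get_input_embeddings()
soft_prompt = torch.nn.Parameter(0.02 * torch.randn(1, n_soft, embed.embedding_dim))

token_embeds = embed(hard_ids)                                # (1, seq_len, dim)
inputs_embeds = torch.cat([soft_prompt, token_embeds], dim=1)  # soft + hard prefix

# Forward pass; during training, gradients would update soft_prompt
# (and, depending on the setup, the model weights as well).
outputs = model(inputs_embeds=inputs_embeds)
print(outputs.logits.shape)  # torch.Size([1, n_soft + seq_len, vocab_size])
```

In this reading, the hard hint steers which entity the generated assertion is about, while the soft tokens are tuned to encode the hinting behavior itself; how the paper actually parameterizes and trains these prompts is specified in the full text, not here.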