Can Language Models Take A Hint? Prompting for Controllable Contextualized Commonsense Inference

Anonymous

16 Oct 2021 (modified: 05 May 2023) · ACL ARR 2021 October Blind Submission
Abstract: Generating commonsense assertions given a story context is a difficult challenge even for modern language models. One reason may be that the model has to "guess" which topic or entity in the story to generate an assertion about. Prior work has tackled part of this problem by providing techniques to align commonsense inferences with stories and training language generation models on the aligned data. However, no prior work provides a means to control which parts of an assertion are generated. In this work, we present "hinting", a data augmentation technique that improves contextualized commonsense inference. Hinting is a prefix prompting strategy that uses both hard and soft prompts. We demonstrate the effectiveness of hinting on two contextual commonsense inference frameworks, ParaCOMET and GLUCOSE, for both general and context-specific inference.
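The abstract describes hinting as a prefix prompting strategy that combines hard (textual) and soft (learned) prompts. Below is a minimal, hypothetical sketch of how such a scheme could be wired up with Hugging Face Transformers; the model choice (facebook/bart-base), the <hint> separator, NUM_SOFT_TOKENS, and the helper names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of hinting-style prefix prompting: a hard textual hint spliced
# into the story text, plus learned soft-prompt embeddings prepended to
# the encoder input. All names and hyperparameters here are assumptions
# for illustration, not the authors' published setup.
import torch
import torch.nn as nn
from transformers import BartTokenizer, BartForConditionalGeneration

NUM_SOFT_TOKENS = 10  # assumed length of the learned soft prompt

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
embed_dim = model.config.d_model

# Soft prompt: one trainable embedding vector per virtual prefix token.
soft_prompt = nn.Parameter(torch.randn(NUM_SOFT_TOKENS, embed_dim) * 0.02)

def make_hinted_input(story: str, hint: str):
    """Hard prompt: append a textual hint (e.g. the target entity or
    relation) to the story before tokenization. <hint> is a hypothetical
    separator, not a token defined by the paper."""
    return tokenizer(f"{story} <hint> {hint}", return_tensors="pt")

def forward_with_soft_prompt(batch, labels):
    # Look up token embeddings, then prepend the soft-prompt vectors.
    tok_embeds = model.get_input_embeddings()(batch["input_ids"])
    prefix = soft_prompt.unsqueeze(0).expand(tok_embeds.size(0), -1, -1)
    inputs_embeds = torch.cat([prefix, tok_embeds], dim=1)
    # Extend the attention mask to cover the soft-prompt positions.
    pad = torch.ones(batch["attention_mask"].size(0), NUM_SOFT_TOKENS,
                     dtype=batch["attention_mask"].dtype)
    attention_mask = torch.cat([pad, batch["attention_mask"]], dim=1)
    return model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask, labels=labels)
```

In a sketch like this, the hard prompt steers which part of the assertion is generated (the controllability the abstract claims), while the soft prompt is the only newly trained component if the backbone is kept frozen.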