Fine-grained Controllable Text Generation through In-context Learning with Feedback

ACL ARR 2024 June Submission 1727 Authors

14 Jun 2024 (modified: 25 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: We present a method for rewriting an input sentence to match specific values of nontrivial linguistic features, such as dependency depth. In contrast to earlier work, our method uses in-context learning rather than finetuning, making it applicable in use cases where data is sparse. We show that our model performs accurate rewrites and matches the state of the art on rewriting sentences to a specified school grade level.
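The abstract describes rewriting a sentence via in-context learning with feedback until it matches a target feature value. A minimal sketch of such a loop is shown below; the function names (`generate`, `measure`), prompt wording, and retry budget are illustrative assumptions, not the paper's actual implementation.

```python
def rewrite_with_feedback(sentence, target, measure, generate, max_rounds=3):
    """Repeatedly ask a model to rewrite `sentence` until `measure`
    reports the target feature value (e.g. dependency depth).

    generate(prompt) -> candidate rewrite (stands in for an LLM call)
    measure(text)    -> observed feature value of a candidate
    """
    # Initial in-context prompt stating the desired feature value.
    prompt = (f"Rewrite the sentence so its feature value is {target}.\n"
              f"Sentence: {sentence}")
    candidate = sentence
    for _ in range(max_rounds):
        candidate = generate(prompt)
        observed = measure(candidate)
        if observed == target:
            break
        # Feed the mismatch back into the context for the next attempt.
        prompt += (f"\nYour rewrite: {candidate}\n"
                   f"Observed value {observed}, wanted {target}. Try again.")
    return candidate
```

In practice `measure` would wrap a linguistic analyzer (e.g. a dependency parser for tree depth) and `generate` a few-shot-prompted language model; the loop simply appends each failed attempt and its observed value to the context before retrying.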
Paper Type: Short
Research Area: Generation
Research Area Keywords: Controllable Text Generation, In-context learning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1727