ProSwitch: Knowledge-Guided Instruction Tuning to Generate Professional and Non-Professional Styled Text

ACL ARR 2024 April Submission212 Authors

15 Apr 2024 (modified: 11 Jun 2024) · License: CC BY 4.0
Abstract: Large Language Models (LLMs) have demonstrated efficacy in various linguistic applications, including text summarization and controlled text generation. However, their capacity to switch between styles via instruction tuning remains underexplored. This study focuses on the style-switching abilities of LLMs and introduces a novel approach, named ProSwitch, which enables a language model to generate text in both professional and non-professional styles by tuning and evaluating it under the guidance of domain and style knowledge. ProSwitch unfolds across three phases: LLM-augmented preparation, which gathers domain knowledge and QA pairs; instruction tuning, which optimizes LLMs with multiple levels of instruction formats; and comprehensive evaluation, which assesses both the professionalism discrimination and the reference-based quality of the generated text. Comparative analysis of ProSwitch against general and specialized LLMs shows that our approach outperforms baselines in switching between professional and non-professional text generation.
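For intuition, below is a minimal sketch of what a knowledge-guided, style-switching instruction format might look like. The function name, prompt templates, and domain-hint field are hypothetical illustrations under the assumptions stated in the abstract; the paper's actual instruction formats are not reproduced here.

```python
# Hypothetical illustration of a style-switching instruction format.
# ProSwitch's actual templates and knowledge-injection scheme may differ.

def build_instruction(question: str, style: str, domain_hint: str = "") -> str:
    """Compose an instruction asking the model to answer `question`
    in either a professional or non-professional register."""
    assert style in ("professional", "non-professional")
    tone = (
        "Answer as a domain expert, using precise terminology."
        if style == "professional"
        else "Answer in plain, everyday language for a layperson."
    )
    parts = [f"Instruction: {tone}"]
    if domain_hint:
        # Optional knowledge gathered in an LLM-augmented preparation phase.
        parts.append(f"Domain knowledge: {domain_hint}")
    parts.append(f"Question: {question}")
    return "\n".join(parts)

# The same question rendered in both styles:
q = "What causes type 2 diabetes?"
print(build_instruction(q, "professional"))
print(build_instruction(q, "non-professional"))
```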
Paper Type: Long
Research Area: Generation
Research Area Keywords: Generation, Sentiment Analysis, Stylistic Analysis, and Argument Mining, Language Modeling, NLP Applications, Question Answering
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 212