ProSwitch: Fine-Tuning Large Language Models to Generate Professional and Non-Professional Styled Text

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission · Readers: Everyone
Abstract: Large Language Models (LLMs) have proven effective in various language tasks, such as text summarization and controlled text generation. However, research on fine-tuning LLMs to switch between particular text styles remains insufficient. In this study, we introduce ProSwitch, an approach that enables a language model to generate both professional and non-professional styled answers through knowledge-guided instruction tuning. ProSwitch proceeds in three stages: data preparation, which gathers domain knowledge and a training set; instruction tuning, which adjusts the language model with coarse- and fine-grained instructions; and comprehensive evaluation, which assesses the professionalism discrimination and language quality of the generated text. We compare ProSwitch with prevalent and specialized language models, and the experimental results show that our approach achieves a greater distinction between professional and non-professional text generation than the baseline models.
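
The abstract gives no implementation details, so the following is only a minimal sketch of what knowledge-guided instruction tuning with coarse- and fine-grained style instructions might look like in practice; all prompt templates, field names, and the example QA pair are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): building instruction-tuning
# records that pair a question with a target style. A coarse instruction only
# names the style; a fine-grained one adds a style hint and retrieved domain
# knowledge. Templates and hints below are hypothetical.

COARSE_TEMPLATE = (
    "You are answering a question in the {domain} domain.\n"
    "Answer in a {style} style.\n"
    "Question: {question}\n"
    "Answer:"
)

FINE_TEMPLATE = (
    "You are answering a question in the {domain} domain.\n"
    "Answer in a {style} style. {style_hint}\n"
    "Relevant domain knowledge: {knowledge}\n"
    "Question: {question}\n"
    "Answer:"
)

STYLE_HINTS = {
    "professional": "Use precise terminology and name mechanisms where relevant.",
    "non-professional": "Use plain language a layperson can follow; avoid jargon.",
}

def build_example(question, answer, style, domain, knowledge=None):
    """Return one prompt/completion record; fine-grained when knowledge is given."""
    if knowledge is None:
        prompt = COARSE_TEMPLATE.format(
            domain=domain, style=style, question=question
        )
    else:
        prompt = FINE_TEMPLATE.format(
            domain=domain, style=style, style_hint=STYLE_HINTS[style],
            knowledge=knowledge, question=question,
        )
    return {"prompt": prompt, "completion": " " + answer}

# One QA pair can yield paired records for both target styles, so the model
# sees the same content rendered professionally and non-professionally.
records = [
    build_example(
        "What does ibuprofen do?",
        "Ibuprofen is a nonsteroidal anti-inflammatory drug that inhibits "
        "cyclooxygenase enzymes, reducing prostaglandin synthesis.",
        style="professional", domain="medicine",
        knowledge="Ibuprofen inhibits COX-1 and COX-2.",
    ),
    build_example(
        "What does ibuprofen do?",
        "It eases pain, brings down fever, and calms swelling.",
        style="non-professional", domain="medicine",
    ),
]
```

Records in this prompt/completion form could then be fed to any standard supervised fine-tuning pipeline; the key design choice the abstract implies is that style is controlled by the instruction text rather than by separate models per style.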
Paper Type: long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.