Abstract: Authorship style transfer seeks to adapt the style of a neutral text to reflect the speaking or writing manner of a specific person. While traditional methods excel at transferring clearly defined styles, such as positive or negative sentiment, they struggle with authorship styles. Large language models (LLMs) offer a potential solution, yet they also falter on authorship styles that are rarely encountered during pre-training. This paper introduces an inverse knowledge distillation method that uses LLMs to distill (neutral text, stylized text) pairs by removing the style from existing stylized texts, a task made easier by the abundance of neutral texts seen during pre-training. Using the distilled corpus, we train a compact, deployment-friendly model tailored to the desired style. Experimental results on four authorship-stylized datasets demonstrate the superiority of the proposed inverse knowledge distillation over conventional style transfer approaches and over direct forward transfer with LLMs. Our dataset and code are available at https://github.com/AnonymousRole/Lifelike-Writer.
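The distillation pipeline described in the abstract (an LLM neutralizes existing stylized sentences, and the resulting pairs train a compact forward model) could be sketched roughly as below. This is a minimal illustration only: the prompt wording, the `call_llm` stub, and the JSONL pair format are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of inverse knowledge distillation data construction:
# an LLM rewrites author-stylized sentences into neutral paraphrases,
# yielding (neutral, stylized) pairs for training a small transfer model.
import json

NEUTRALIZE_PROMPT = (
    "Rewrite the following sentence in a plain, neutral style, "
    "preserving its meaning but removing any personal voice:\n\n{text}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; plug in any chat/completion client here."""
    raise NotImplementedError("swap in a real LLM client")

def build_pairs(stylized_corpus, out_path="distilled_pairs.jsonl"):
    """Distill (neutral, stylized) pairs from a corpus of stylized sentences."""
    with open(out_path, "w", encoding="utf-8") as f:
        for stylized in stylized_corpus:
            neutral = call_llm(NEUTRALIZE_PROMPT.format(text=stylized))
            # The compact model is later trained in the forward direction:
            # input = neutral text, target = stylized text.
            record = {"input": neutral, "target": stylized}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```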
Paper Type: long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English; Chinese
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.