Abstract: Empowering Large Language Models (LLMs) with distinct, human-like personality traits has become an innovative task for developing advanced dialog systems. Although LLMs demonstrate impressive capabilities in following instructions, directly prompting them to exhibit certain personalities through manually crafted instructions may result in sub-optimal performance. In this paper, we propose a plug-and-play prompting method to manipulate LLMs' personality traits. Specifically, we append discrete personalized suffixes, automatically generated through an aggregated gradient-based search method, to the user query or dialog histories and thereby induce LLMs to respond with the target personalities. In addition, because the search space is highly redundant, we adopt a reward-based strategy to prune the vocabulary and focus exclusively on influential tokens. Experimental results on four models ranging from $1.1$B to $13$B parameters show that our method achieves $79.9$\% accuracy in customizing LLMs' personalities, significantly outperforming other prompting methods ($65.5\%$) as well as model editing methods. Our method also excels in generation fluency and quality, attaining the lowest generation perplexity and the highest GPT-4 evaluation scores.
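The abstract's pipeline (prune the vocabulary by a per-token reward, then search for a discrete suffix over the surviving tokens) can be sketched in miniature. Everything below is illustrative: a toy embedding table and a dot-product "reward" stand in for the frozen LLM and the personality classifier, and a greedy coordinate search stands in for the paper's aggregated gradient-based search.

```python
import random

random.seed(0)

# Toy setup (all names and sizes are illustrative, not the paper's):
# a frozen "model" scores a suffix by projecting the mean of its token
# embeddings onto a target-personality direction.
VOCAB, DIM, SUFFIX_LEN = 50, 8, 3
emb = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(VOCAB)]
target = [random.gauss(0, 1) for _ in range(DIM)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def score(suffix_ids):
    """Higher = closer to the target personality (toy reward)."""
    mean = [sum(emb[t][d] for t in suffix_ids) / len(suffix_ids)
            for d in range(DIM)]
    return dot(mean, target)

# Reward-based vocabulary pruning: keep only the tokens whose individual
# contribution to the reward is largest, mimicking the focus on
# influential tokens described in the abstract.
token_reward = [dot(e, target) for e in emb]
pruned_vocab = sorted(range(VOCAB), key=lambda t: token_reward[t])[-10:]

# Greedy coordinate search over the pruned vocabulary: for each suffix
# slot, try every surviving candidate and keep the best replacement
# (a simplified stand-in for the gradient-guided search).
suffix = random.choices(pruned_vocab, k=SUFFIX_LEN)
for _ in range(5):  # a few optimization sweeps
    for pos in range(SUFFIX_LEN):
        suffix[pos] = max(
            pruned_vocab,
            key=lambda t: score(suffix[:pos] + [t] + suffix[pos + 1:]),
        )

print("optimized suffix ids:", suffix, "score:", round(score(suffix), 3))
```

In the real method the reward comes from the target-personality signal and candidate substitutions are ranked by gradients through the frozen model, but the two-stage structure (prune, then search only over influential tokens) is the same.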
Paper Type: long
Research Area: Dialogue and Interactive Systems
Contribution Types: NLP engineering experiment, Reproduction study
Languages Studied: English
Preprint Status: We are considering releasing a non-anonymous preprint in the next two months (i.e., during the reviewing process).
A1: yes
A1 Elaboration For Yes Or No: 7
A2: yes
A2 Elaboration For Yes Or No: 8
A3: yes
A3 Elaboration For Yes Or No: 1
B: yes
B1: yes
B1 Elaboration For Yes Or No: 4
B2: n/a
B3: n/a
B4: n/a
B5: n/a
B6: yes
B6 Elaboration For Yes Or No: Appendix A
C: yes
C1: yes
C1 Elaboration For Yes Or No: section 4.1, 5.4
C2: yes
C2 Elaboration For Yes Or No: section 4.1, 5.4
C3: yes
C3 Elaboration For Yes Or No: 4
C4: yes
C4 Elaboration For Yes Or No: 4
D: no
E: yes
E1: yes
E1 Elaboration For Yes Or No: 4, 5