Abstract: Multi-attribute controlled text generation (CTG) requires models to generate sentences with prespecified attributes. Previous works often use the corresponding single-attribute data to train multi-attribute generators. However, the type (mainly sentiment and topic attributes, in English) and number (up to three) of attributes explored remain limited, since the cost of data collection increases significantly whenever new attributes emerge. Benefiting from recent advances in large language models (LLMs), we experimentally show that LLMs with standard prompting can achieve promising performance on multi-attribute CTG tasks without any single-attribute data. However, standard prompting often suffers from missing or misunderstood attributes. To address these issues, our basic idea is to help LLMs better understand the attributes and plan the content before producing the final completion, just as human writers do. To this end, we propose CoW, a Chain-of-Writing prompting method that guides LLMs to conduct multi-attribute CTG in a step-by-step manner. Following a think-plan-write order, CoW decomposes the task into three corresponding sub-steps and uses discrete prompts to encourage LLMs to generate auxiliary information, such as explanations of the attributes' meanings and a storyline. Experiments on three generation tasks demonstrate that CoW achieves consistent improvements on up to seven attributes, and these empirical results provide novel insights for greatly expanding the task settings of multi-attribute CTG.
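The think-plan-write decomposition described in the abstract can be sketched as a chain of three prompts. This is a minimal illustration only: the function name `cow_generate`, the `call_llm` stand-in, and all prompt wordings are assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of the CoW (Chain-of-Writing) prompting pipeline:
# the task is decomposed into think -> plan -> write sub-steps, each
# issued as a separate prompt. `call_llm` is an assumed stand-in for
# any chat-completion API; all prompt texts here are illustrative.

def cow_generate(attributes, call_llm):
    """Generate text satisfying `attributes` via think-plan-write prompting."""
    # Step 1 (think): ask the model to explain each attribute's meaning.
    explanations = call_llm(
        "Explain what each of these attributes means for a piece of writing: "
        + ", ".join(attributes)
    )
    # Step 2 (plan): ask for a storyline that realizes the attributes.
    storyline = call_llm(
        "Given these attribute explanations:\n" + explanations
        + "\nCreate a brief storyline that satisfies all attributes."
    )
    # Step 3 (write): produce the final completion from the plan.
    return call_llm(
        "Following this storyline:\n" + storyline
        + "\nWrite the final text, keeping every attribute: "
        + ", ".join(attributes)
    )
```

In use, `call_llm` would wrap whatever model API is available; the key design choice is that each sub-step's output is fed forward as context for the next prompt rather than asking for the completion in one shot.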
Paper Type: long
Research Area: Generation
Contribution Types: Approaches to low-resource settings
Languages Studied: English, Chinese
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.