Abstract: A quote tweet enables users to share others' content while adding their own commentary. To enhance public engagement through quote tweets, we investigate the task of generating popular quote tweets. This task aims to produce quote tweets that garner higher popularity, as indicated by increased likes, replies, and retweets. Despite the impressive language generation capabilities of large language models (LLMs), there has been limited research on how LLMs can effectively learn the popularity of text to better engage the public. We therefore introduce a novel approach called Response-augmented Popularity-Aligned Language Model (RePALM), which aligns language generation with popularity by leveraging insights from augmented auto-responses provided by readers. We employ the Proximal Policy Optimization framework with a dual-reward mechanism to jointly optimize for the popularity of the quote tweet and its consistency with the auto-responses. For our experiments, we collected two datasets: quote tweets containing external links and quote tweets referencing others' tweets. Extensive results demonstrate the superiority of RePALM over advanced language models that do not incorporate response augmentation.
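The dual-reward mechanism described in the abstract can be sketched as a weighted combination of the two reward signals fed to the PPO update. This is a minimal illustration only: the function names, the weighted-sum form, and the weight `alpha` are assumptions, not the paper's stated formulation.

```python
# Hypothetical sketch of a dual-reward combination for PPO fine-tuning.
# All names (dual_reward, alpha) and the weighted-sum form are assumptions;
# RePALM's actual reward formulation may differ.

def dual_reward(popularity_score: float,
                consistency_score: float,
                alpha: float = 0.5) -> float:
    """Combine a popularity reward and an auto-response consistency reward
    into the single scalar reward consumed by a PPO step."""
    return alpha * popularity_score + (1.0 - alpha) * consistency_score

# Example: rewards for a small batch of generated quote tweets, where each
# pair is (popularity_score, consistency_score) from the two reward models.
batch_rewards = [dual_reward(p, c) for p, c in [(0.8, 0.6), (0.2, 0.9)]]
```

In practice, such scalar rewards would be passed to a PPO trainer alongside the generated token sequences; the weight `alpha` would trade off chasing popularity against staying consistent with the auto-responses.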
Paper Type: long
Research Area: Computational Social Science and Cultural Analytics
Contribution Types: Data resources
Languages Studied: English
Preprint Status: We plan to release a non-anonymous preprint in the next two months (i.e., during the reviewing process).
A1: yes
A2: yes
A3: yes
B: no
C: yes
D: yes
E: yes
E1: yes