Stark: Social Long-Term Multi-Modal Conversation with Persona Commonsense Knowledge

ACL ARR 2024 June Submission 1051 Authors

14 Jun 2024 (modified: 02 Aug 2024) · License: CC BY 4.0
Abstract: Humans share a wide variety of images related to their personal experiences within conversations via instant messaging tools. However, existing works (1) focus on image-sharing behavior within a single session, which limits long-term social interaction, and (2) lack personalized image-sharing behavior. In this work, we introduce Stark, a large-scale long-term multi-modal dialogue dataset that covers a wide range of social personas in a multi-modality format, time intervals, and images. To construct Stark automatically, we propose a novel multi-modal contextualization framework, Mcu, that generates long-term multi-modal dialogue distilled from ChatGPT and our proposed plan-and-execute image aligner. Using Stark, we train a multi-modal conversation model, Ultron 7B, which demonstrates impressive visual imagination ability. Furthermore, we demonstrate the effectiveness of our dataset through human evaluation. The code, dataset, and model will be publicly released upon publication.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Long-Term Dialogue, Multi-Modal Dialogue, Image-Sharing Behavior
Contribution Types: Data resources
Languages Studied: English
Submission Number: 1051