USDC: A Dataset of $\underline{U}$ser $\underline{S}$tance and $\underline{D}$ogmatism in Long $\underline{C}$onversations

24 May 2024 (modified: 13 Nov 2024) · Submitted to NeurIPS 2024 Track: Datasets and Benchmarks · License: CC BY 4.0
Keywords: large language models, annotators, user opinions, stance, dogmatism, human-llm alignment, open-source llms, closed-source llms
Abstract: Identifying users' opinions and stances in long conversation threads on various topics is critical for enhanced personalization, market research, political campaigns, customer service, conflict resolution, targeted advertising, and content moderation. Training language models to automate this task is therefore valuable. However, gathering the manual annotations needed to train such models poses multiple challenges: 1) it is both time-consuming and costly; 2) conversation threads can be very long, increasing the chance of noisy annotations; and 3) interpreting instances where a user changes their opinion within a conversation is difficult, because such transitions are often subtle and not expressed explicitly. Inspired by the recent success of large language models (LLMs) on complex natural language processing (NLP) tasks, we leverage Mistral Large and GPT-4 to automate the human annotation process for the following two tasks, while also providing reasoning: i) user stance detection, which involves labeling a user's stance in a post within a conversation on a five-point scale; ii) user dogmatism detection, which involves labeling a user's overall opinion in the conversation on a four-point scale. Majority voting over zero-shot, one-shot, and few-shot annotations from these two LLMs on 764 multi-user Reddit conversations yields the USDC dataset. USDC is then used to finetune and instruction-tune multiple deployable small language models for the 5-class stance and 4-class dogmatism classification tasks. We make the code and dataset publicly available [https://anonymous.4open.science/r/USDC-0F7F].
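The abstract describes curating labels by majority voting over zero-shot, one-shot, and few-shot annotations from two LLMs. A minimal sketch of that aggregation step is below; the label names and annotation-source keys are hypothetical illustrations, not taken from the paper's actual codebase:

```python
from collections import Counter


def majority_vote(labels):
    """Return the most frequent label among LLM annotations.

    Ties are broken by whichever label Counter encounters first;
    the paper may use a different tie-breaking policy.
    """
    return Counter(labels).most_common(1)[0][0]


# Hypothetical stance annotations for one user (five-point scale),
# one per (model, prompting strategy) pair.
annotations = {
    "gpt4_zero_shot": "somewhat_against",
    "gpt4_one_shot": "somewhat_against",
    "gpt4_few_shot": "against",
    "mistral_zero_shot": "somewhat_against",
    "mistral_one_shot": "neutral",
    "mistral_few_shot": "somewhat_against",
}

final_label = majority_vote(annotations.values())
print(final_label)  # -> somewhat_against
```

The same aggregation applies unchanged to the four-point dogmatism labels, since the function is agnostic to the label set.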
Supplementary Material: pdf
Submission Number: 754