Keywords: Human–AI rapport, In-group persona generation, Conversational agents, Rapport, Personalization
Abstract: LLM-based chatbots are increasingly applied in interpersonal domains such as counseling and peer support, where establishing human–AI rapport is crucial yet remains challenging. In this work, we introduce a novel approach for conditioning LLMs with \textbf{in-group personas}, which (i) first identifies a user’s primary concern and brief personal context (e.g., a computer science undergraduate worried about future career prospects), and (ii) generates a synthetic in-group persona that shares a similar primary concern while differing in background and narrative details such as age or profession (e.g., a junior researcher at an AI startup). We then conduct a human-subject study to systematically evaluate the effectiveness of in-group persona agents in enhancing human–AI rapport, comparing our approach against two baseline conditions: a conventional agent without persona conditioning and an agent exhibiting minimal self-disclosure (e.g., ``I've felt that too''). Results from post-task questionnaires assessing rapport and user experience indicate that the in-group persona agent significantly improves perceived rapport and personal relevance compared to the baselines, and also yields a more positive user experience, most notably higher engagement.
Paper Type: Long
Research Area: Human-AI Interaction/Cooperation and Human-Centric NLP
Research Area Keywords: human-AI interaction/cooperation, human-centered evaluation, user-centered design
Contribution Types: NLP engineering experiment, Data analysis, Surveys
Languages Studied: English
Submission Number: 9574