EMBRACE: Shaping Inclusive Opinion Representation by Aligning Implicit Conversations with Social Norms

ACL ARR 2025 July Submission315 Authors

27 Jul 2025 (modified: 31 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Shaping inclusive representations that embrace diversity and ensure fair participation and reflection of values is at the core of many conversation-based models. However, many existing methods rely on surface-level inclusion, using mentions of user demographics or behavioral attributes of social groups. Such methods overlook the nuanced, implicit expression of opinion embedded in conversations. Furthermore, over-reliance on overt cues can exacerbate misalignment and reinforce harmful or stereotypical representations in model outputs. We therefore take a step back and recognize that equitable inclusion needs to account for the implicit expression of opinion, using the stance of responses to validate normative alignment. This study evaluates how opinions are represented in NLP and computational models by introducing an alignment evaluation framework that foregrounds implicit, often overlooked conversations and evaluates normative social views and discourse. Our approach models the stance of responses as a proxy for the underlying opinion, enabling a considerate and reflective representation of diverse social viewpoints. We evaluate the framework using both (i) positive-unlabeled (PU) online learning with base classifiers, and (ii) instruction-tuned language models to assess post-training alignment. Through this, we provide a basis for understanding how implicit opinions are (mis)represented and offer a pathway toward more inclusive model behavior.
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Computational Social Science and Cultural Analytics, Discourse and Pragmatics, Fairness
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Ethics Statement
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 3.1 Data Collection, and Appendix A.1 Annotation Process
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: A.1 Annotation Process
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Appendix A.1 Annotation Process, Section 3.1 Data Collection, and Section 3 Experimental Setup
B4 Data Contains Personally Identifying Info Or Offensive Content: Yes
B4 Elaboration: Appendix A.1 Annotation Process, Section 3.1 Data Collection
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Appendix A.1 Annotation Process, Section 3.1 Data Collection, and an anonymized link for peer review in the Appendix
B6 Statistics For Data: Yes
B6 Elaboration: Appendix B, Table 6 and Table 4
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Appendix D, Table 7 and Table 8
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Appendix C and D, Table 7 and Table 8
C3 Descriptive Statistics: Yes
C3 Elaboration: Appendix B, C, and D; Tables 3, 5, 6, 9, and 10
C4 Parameters For Packages: Yes
C4 Elaboration: Appendix C
D Human Subjects Including Annotators: Yes
D1 Instructions Given To Participants: Yes
D1 Elaboration: Full annotation guidelines in Appendix A.1, along with an anonymized link for peer review
D2 Recruitment And Payment: Yes
D2 Elaboration: Appendix A.1
D3 Data Consent: Yes
D3 Elaboration: Appendix A and Appendix A.1
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: Yes
D5 Elaboration: Appendix A.1, including the annotation pipeline via Labelbox
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: Yes
Submission Number: 315