Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=_XYt8vr9_wB
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: While neural networks demonstrate a remarkable ability to model linguistic content, capturing contextual information related to a speaker's conversational role is an open area of research. In this work, we analyze the effect of speaker role on language use through the game of Mafia, in which participants are assigned either an honest or a deceptive role. In addition to building a framework to collect a dataset of Mafia game records, we demonstrate that there are differences in the language produced by players with different roles. We confirm that classification models are able to rank deceptive players as more suspicious than honest ones based only on their use of language. Furthermore, we show that training models on two auxiliary tasks outperforms a standard BERT-based text classification approach. We also present methods for using our trained models to identify features that distinguish between player roles, which could be used to assist players during the Mafia game.
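To make the baseline described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: a BERT-based text classifier that assigns each player a suspicion score from their utterances alone. The checkpoint name, the two-label scheme (honest vs. deceptive), the per-player aggregation by concatenation, and the use of the HuggingFace transformers library are all assumptions made for illustration; the classifier head below is untrained and would need fine-tuning on labeled Mafia game transcripts before its scores mean anything.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint; the paper's exact model and training setup are not given here.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Two labels: 0 = honest, 1 = deceptive (Mafia). The classification head is
# randomly initialized and purely illustrative until fine-tuned on game data.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def suspicion_score(utterances: list[str]) -> float:
    """Concatenate one player's utterances and return P(deceptive)."""
    text = " ".join(utterances)
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Hypothetical usage: rank players from most to least suspicious.
game = {
    "player_a": ["I was with player_b all night.", "Why are you accusing me?"],
    "player_b": ["player_a is acting strangely.", "I vote for player_a."],
}
ranking = sorted(game, key=lambda p: suspicion_score(game[p]), reverse=True)
print(ranking)

Ranking by a per-player score, rather than thresholding it, mirrors the abstract's claim that models rank deceptive players as more suspicious than honest ones; the auxiliary-task variant the abstract says outperforms this baseline is not sketched here.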
Dataset: zip
Response To Ethics Reviews (for Conditionally Accepted Papers Only): We greatly appreciate the reviewers' comments on our paper. As there was a character limit on the given section of the submission form, we were unable to add our response to ethical concerns there. However, we acknowledge them, and in the camera-ready version have addressed both by adding a "Limitations and Potential Risks" section and revising the text and claims throughout.

First, the revised paper directly addresses the fact that our results should not be expected to generalize to other settings in which deceptive language may be used, and are instead specific to the particular variant of Mafia that we studied. We believe that our work does inform the general problem of deception detection and that our modeling approach could be useful in other settings, but it is a genuine risk that one might expect the specific models we train or the experimental results we publish to apply more broadly than the specific game setting studied, so we have added text to mitigate that risk.

Second, the revised paper directly addresses the question of whether and how the output of the model would aid honest or deceptive participants in a game. In this setting, the collective goal of the honest participants is to predict the truth, which is also the training objective of the model, and so we can expect that deception detection is fundamentally more useful to honest participants than deceptive ones. However, reliance on a model that has low accuracy poses a genuine risk. We have added text to emphasize that even in this particular setting, automatic deception detection is far from solved.
Presentation Mode: This paper will be presented in person in Seattle
Copyright Consent Signature (type Name Or NA If Not Transferrable): Samee Ibraheem
Copyright Consent Name And Address: University of California, Berkeley, Berkeley, CA