All Models are Wrong, But Some are Deadly: Inconsistencies in Emotion Detection in Suicide-related Tweets

Anonymous

16 Dec 2023, ACL ARR 2023 December Blind Submission
Readers: Everyone
TL;DR: An extensive application and evaluation of LMs in mirroring human cognitive ability to comprehend the context of suicide-related tweets.
Abstract: Recent work in psychology has shown that people who experience mental health challenges (e.g., suicidal ideation) are more likely to express their thoughts, emotions, and feelings on social media than to seek professional help. Distinguishing suicide-related content, such as suicide mentioned in a humorous context, from genuine expressions of suicidal ideation is essential to better understanding context and risk. In this paper, we provide a first analysis of the differences between emotion labels annotated by humans and labels predicted by three fine-tuned language models (LMs) for suicide-related content. We find that (i) there is little agreement between LMs on the emotion labels of suicide-related tweets and (ii) individual LMs predict similar emotion labels across all suicide-related categories. Our findings lead us to question the credibility and usefulness of such methods in high-risk scenarios such as suicidal ideation detection.
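For context, a minimal sketch of how such an inter-model comparison could be carried out is shown below; the model identifiers, the example tweets, and the use of Cohen's kappa as the agreement measure are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: predict emotion labels for tweets with several fine-tuned LMs
# and quantify their pairwise agreement. Model identifiers are placeholders
# (hypothetical), not the models evaluated in the paper.
from itertools import combinations

from sklearn.metrics import cohen_kappa_score
from transformers import pipeline

MODEL_IDS = [
    "org-a/emotion-model-1",  # placeholder identifiers; substitute real
    "org-b/emotion-model-2",  # fine-tuned emotion classifiers here
    "org-c/emotion-model-3",
]

tweets = [
    "I can't take this anymore.",           # toy examples, not real data
    "This exam is going to kill me, lol.",
]

# Predict the top emotion label for each tweet with each model.
predictions = {}
for model_id in MODEL_IDS:
    clf = pipeline("text-classification", model=model_id)
    predictions[model_id] = [out["label"] for out in clf(tweets)]

# Pairwise Cohen's kappa between models; if the models use different label
# sets, labels would first need to be mapped onto a common emotion taxonomy.
for m1, m2 in combinations(MODEL_IDS, 2):
    kappa = cohen_kappa_score(predictions[m1], predictions[m2])
    print(f"{m1} vs {m2}: kappa = {kappa:.2f}")
```

The same pairwise comparison could be run between each model's predictions and the human annotations to examine how far the LMs diverge from the human labels.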
Paper Type: short
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English