RelRoll: A Relative Elicitation Mechanism for Scoring Annotation with A Case Study on Speech Emotion

19 Dec 2022 (modified: 21 Jun 2023) · GI 2023 · Readers: Everyone
Keywords: Scoring Annotation, Relative Labeling, Affective Computing, Emotional Speech, Non-Experts, Labeling Interface
TL;DR: We investigate RelRoll, a mechanism allowing non-experts to give reliable rating annotations through relative labels.
Abstract: It is challenging for non-experts to give reliable labels in scoring annotation tasks because they lack domain knowledge and their ratings vary widely. We present RelRoll, a mechanism that allows non-experts to give reliable ratings, with a case study on speech emotion. It has two main features: a relative labeling interface that highlights emotion-changing sentences, and an approach for estimating absolute labels from relative labels. Highlighting helps non-experts focus on the sentences that require their attention. Given new sentences, we use a network, trained on a set labeled with experts' absolute annotations, to predict emotion-changing sentences and highlight them in the interface. Non-experts then give relative labels to the new sentences through the interface, and we estimate the absolute labels of the new sentences by combining the network output with the relative labels. We ran a user study comparing the proposed interface to the commonly used absolute-labeling interface. Results showed that RelRoll could improve annotation agreement and the accuracy of the estimated absolute labels. We also discuss the gains in user experience in terms of helpfulness, intuitiveness, and ease of use.
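To make the combination step concrete, below is a minimal Python sketch of one plausible way to merge network predictions with an annotator's relative labels. The function name estimate_absolute_labels, the blending weight alpha, and the cumulative-sum anchoring are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

def estimate_absolute_labels(net_scores, rel_deltas, alpha=0.5):
    """Estimate absolute emotion scores for a sequence of sentences.

    A hypothetical combination rule (not the paper's exact formula):
    chain the annotator's relative deltas into a provisional score
    trajectory, then blend it with the network's per-sentence
    predictions to anchor the overall scale.

    net_scores : (n,) array of network-predicted absolute scores
    rel_deltas : (n-1,) array of annotator-reported changes between
                 consecutive sentences (e.g. +1 = more positive)
    alpha      : weight given to the relative-label trajectory
    """
    net_scores = np.asarray(net_scores, dtype=float)
    rel_deltas = np.asarray(rel_deltas, dtype=float)

    # Build a trajectory from the relative labels, anchored at the first
    # network prediction, by cumulatively summing the deltas.
    rel_traj = net_scores[0] + np.concatenate(([0.0], np.cumsum(rel_deltas)))

    # Blend the two sources: relative labels capture local change,
    # network predictions keep the absolute scale calibrated.
    return alpha * rel_traj + (1.0 - alpha) * net_scores

# Toy usage: five sentences, with nonzero deltas only where the
# annotator perceived an emotion change.
scores = estimate_absolute_labels(
    net_scores=[3.0, 3.1, 4.2, 4.0, 2.5],
    rel_deltas=[0.0, 1.0, 0.0, -1.5],
)
print(scores)
```

In this sketch, the relative trajectory preserves the annotator's local emotion changes while the network output keeps the overall scale calibrated; the paper's actual estimation formulas may differ.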
Track: HCI/visualization
Accompanying Video: zip
Summary Of Changes: 2023/03/09 revision: We checked for and removed contractions.
* We fixed the issues in the statistical analysis of the questionnaire results.
* We added missing related work and clarified the differences from the most closely related work.
* We improved the discussion of limitations regarding the scalability and generalization of the mechanism, taking all reviews into account.
* We rewrote parts of the abstract and introduction to clarify the motivation and give more complete definitions of non-experts and scoring annotation.
* We added implementation details of the neural network and rewrote the explanation of the formulas.
* We scrutinized and corrected typos and grammar mistakes.