Are Humans Biased in Assessment of Video Interviews?

ICMI (Adjunct) 2019. Published 2019; last modified 08 Nov 2021.
Abstract: Supervised systems require human labels for training. But are humans themselves always impartial during the annotation process? We examine this question in the context of automated assessment of human behavioral tasks. Specifically, we investigate whether human ratings can be trusted at face value when scoring video-based structured interviews, and whether such ratings can affect machine learning models that use them as training data. We present preliminary empirical evidence indicating that these annotations contain biases, most of which are visual in nature.