Abstract: Large employers are increasingly turning to sentiment analysis technologies as a form of corporate worker surveillance. Previous work has identified numerous problems with this trend, arguing that these technologies may inherit racial or gender biases from historical data, that the expansion of the surveillance apparatus itself threatens privacy and limits freedom of expression, and that the often proprietary algorithms may be inaccurate. This paper contributes to this conversation with a critical analysis of “sentiment” as it is operationalized by these technologies. As we argue, even on its own terms, sentiment monitoring software fails to capture the objective view of worker sentiment it imagines to exist. Taking as a case study the sentiment analysis software platform Aware, we argue that these systems construct, through rhetoric and statistics, categories such as employee sentiment and toxic language that are divorced from the realities these systems ultimately affect. In particular, we draw on Aware’s own public reports documenting its technology and experiments to reconstruct an analog of Aware’s sentiment analysis technology for study. We then compare the categories of “sentiment” and “toxicity” as they appear in the sentiment analysis literature, in Aware’s rhetoric, and in the analyzed dataset. Through this combination of technical and critical analyses, we place contemporary sentiment analysis technologies within a troubled historical context of workplace sentiment surveillance. We also illustrate a method of analyzing contemporary algorithmic technologies in terms of their underlying critical concepts, one that may have applications beyond critiques of workplace sentiment analysis.