Abstract: Wikipedia content is produced by a complex socio-technical system (STS) and exhibits numerous biases, such as gender and cultural bias. We investigate how these biases relate to the concepts of algorithmic bias and fairness as defined in the context of algorithmic systems. We systematically review 75 papers describing different types of bias in Wikipedia, which we classify and relate to established notions of harm and normative expectations of fairness as defined for machine-learning-driven algorithmic systems. In addition, by analysing causal relationships between the observed phenomena, we demonstrate the complexity of the socio-technical processes causing harm.
External IDs: dblp:conf/collabtech/DamadiD23