Abstract: Deep neural networks have become ubiquitous in applications throughout the tech industry, underlying everything from facial recognition to automated translation tools. Many examples have demonstrated the potential ethical pitfalls of their application, and analyses of such examples often cite two major causes: biased training data sets and a lack of diversity in the institutions that produce deep learning systems. In this work, we examine how inherent qualities of deep learning itself can give rise to its misuse, using the framework outlined in Winner's "Do Artifacts Have Politics?". First, we argue that the design paradigm advocated by the deep learning revolution, namely the shift to "end-to-end" systems, has opened the door to ignorance of the sensitive, context-specific qualities of some input data. Second, we assert that deep learning's reliance on increasingly large data sets and compute resources concentrates power over these algorithms in corporations and governments, leaving its practice vulnerable to the institutional racism and sexism so often found there.