Abstract: Machine learning models play an important role in decision-making systems in areas such as hiring, insurance, and predictive policing, yet guaranteeing their trustworthiness remains a challenge. Fairness is one of the most critical properties of these models, and even a small number of individual discriminatory cases can severely undermine the trustworthiness of such systems. In this paper, we present a systematic approach to testing the fairness of a machine learning model, in which individual discriminatory inputs are generated automatically and adaptively using state-of-the-art deep reinforcement learning techniques. Our approach explores and exploits the input space efficiently, finding more individual discriminatory inputs in less time. Case studies on typical benchmark models demonstrate the effectiveness and efficiency of our approach compared with state-of-the-art black-box fairness testing approaches.
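
To make the testing objective concrete, the sketch below illustrates the standard notion of an individual discriminatory instance used in black-box fairness testing: an input that receives a different prediction when only its protected attribute (e.g., gender or race) is changed. This is a minimal sketch under stated assumptions, not the paper's implementation; the `predict` function, `protected_idx`, and `protected_values` are placeholder names, and the paper's contribution lies in how the search over such inputs is driven adaptively by deep reinforcement learning.

```python
import numpy as np

def is_individual_discriminatory(predict, x, protected_idx, protected_values):
    """Return True if x is an individual discriminatory instance:
    changing ONLY the protected attribute flips the model's prediction.

    predict          -- black-box model returning a class label for an input
    x                -- 1-D feature vector (array-like)
    protected_idx    -- index of the protected attribute in x
    protected_values -- all possible values of the protected attribute
    """
    original = predict(x)
    for v in protected_values:
        if v == x[protected_idx]:
            continue  # skip the value x already has
        x_variant = np.array(x, copy=True)
        x_variant[protected_idx] = v  # differ only in the protected attribute
        if predict(x_variant) != original:
            return True
    return False
```

A fairness tester then searches the input space for inputs where this check returns True; a purely random search wastes many queries, which is why an adaptive, reinforcement-learning-guided generator that balances exploration and exploitation can find more such inputs with fewer model queries.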