Abstract: Representation learning is a core component of any machine learning model. Learning privacy-preserving, discriminative representations that are invariant to nuisance factors remains an open problem. Such invariance is typically achieved by removing sensitive information from the learned representation, and the resulting privacy-preserving representations are believed to benefit medical and federated learning applications. In this paper, we propose a framework for learning invariant fair representations by decomposing the learned representation into a target code and a sensitive code, and imposing an entropy maximization constraint that makes the target code invariant to sensitive information. We evaluate the proposed model on three applications derived from two medical datasets, covering autism detection and healthcare insurance, and, compared against two baseline methods, achieve a state-of-the-art trade-off between task performance and sensitive information leakage. We conclude with a discussion of the difficulties of applying fair representation learning to medical data and of when it is desirable.
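The abstract does not include implementation details, but the core mechanism (splitting the representation into target and sensitive codes, then maximizing the entropy of sensitive-attribute predictions made from the target code) can be sketched briefly. The following is a minimal, illustrative PyTorch sketch, not the authors' code: every module and function name (`DecomposedEncoder`, `entropy_maximization_loss`, the layer sizes) is an assumption for illustration.

```python
# Minimal sketch of a decomposed encoder with an entropy-maximization
# constraint on the target code. Illustrative only; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedEncoder(nn.Module):
    """Splits an input into a target code z_t and a sensitive code z_s."""
    def __init__(self, in_dim, code_dim, n_sensitive_classes):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.target_head = nn.Linear(128, code_dim)     # z_t: task-relevant
        self.sensitive_head = nn.Linear(128, code_dim)  # z_s: sensitive info
        # Classifier that tries to predict the sensitive attribute
        # (e.g. a protected group label) from the target code alone.
        self.sensitive_clf = nn.Linear(code_dim, n_sensitive_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.target_head(h), self.sensitive_head(h)

def entropy_maximization_loss(logits):
    """Negative entropy of p(s | z_t). Minimizing this loss maximizes
    the entropy, pushing sensitive-attribute predictions from the
    target code toward uniform, i.e. z_t becomes uninformative
    about the sensitive attribute."""
    p = F.softmax(logits, dim=1)
    entropy = -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return -entropy

# Usage: this term would be added to the task loss during training.
enc = DecomposedEncoder(in_dim=64, code_dim=16, n_sensitive_classes=2)
z_t, z_s = enc(torch.randn(8, 64))
invariance_loss = entropy_maximization_loss(enc.sensitive_clf(z_t))
```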