Functional Classification Under Local Differential Privacy with Model Reversal and Model Average

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Federated learning, Functional classification, Local differential privacy, Model average, Model reversal
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We exploit the basis-expansion property of functional data for dimensionality reduction in modeling, and allocate more clients to validation than to training so that weak classifiers are evaluated more reliably, yielding an enhanced classifier through model averaging.
Abstract: Local differential privacy (LDP) has been a focal point of data privacy research, yet its application to functional data classification remains underexplored. To address this gap, we present a novel approach to the challenge of infinite dimensionality in functional classification under LDP constraints. The main idea is to exploit the inherent property that functional data can be approximated by a linear combination of basis functions, which reduces the dimensionality of the data and facilitates model training under LDP constraints. Specifically, we propose algorithms for constructing functional classifiers in both single-server and heterogeneous multi-server environments under LDP. In single-server scenarios, we introduce an allocation strategy in which only a small fraction of samples is used to train multiple weak classifiers, while the majority is used to evaluate their performance; a robust classifier with enhanced performance is then constructed by model averaging. We also introduce a novel technique, "model reversal", which effectively enhances the performance of weak classifiers. In multi-server contexts, we employ federated learning so that each server benefits from shared knowledge to improve its own classifier. Experimental results demonstrate that our algorithms significantly boost the performance of functional classifiers under LDP.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3366
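To make the pipeline in the abstract concrete, here is a minimal, self-contained sketch (not the authors' code) of the single-server recipe: reduce each curve to a few basis coefficients, privatize the coefficients with a generic LDP-style mechanism, train several weak classifiers on small splits, flip any classifier that performs worse than chance ("model reversal"), and combine the rest by model averaging. The Fourier basis, the clipped Laplace mechanism, the label-correlation weak learner, the accuracy weighting, and the privacy budget eps are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)           # common observation grid for the curves

def fourier_basis(t, K=5):
    """First K Fourier basis functions evaluated on the grid t."""
    cols = [np.ones_like(t)]
    for k in range(1, K):
        fn = np.sin if k % 2 == 1 else np.cos
        cols.append(fn(2.0 * np.pi * ((k + 1) // 2) * t))
    return np.column_stack(cols)          # shape (len(t), K)

B = fourier_basis(t)

# Synthetic two-class functional data: the classes differ in their mean curve.
n = 600
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * np.sin(2.0 * np.pi * t)[None, :] + rng.normal(0.0, 0.8, (n, t.size))

# Step 1: dimensionality reduction -- least-squares projection onto the basis.
coef = np.linalg.lstsq(B, X.T, rcond=None)[0].T           # shape (n, K)

# Step 2: a generic LDP-style perturbation (clip, then Laplace noise);
# eps and the clipping bound are illustrative choices, not the paper's.
def ldp_perturb(Z, eps, bound=2.0):
    Z = np.clip(Z, -bound, bound)
    scale = 2.0 * bound * Z.shape[1] / eps                # crude L1-sensitivity bound
    return Z + rng.laplace(0.0, scale, Z.shape)

coef = ldp_perturb(coef, eps=8.0)

# Step 3: few samples train each weak classifier; the majority evaluate them.
idx = rng.permutation(n)
train, valid = idx[:100], idx[100:]

directions, weights = [], []
for chunk in np.array_split(train, 10):                   # 10 weak classifiers
    Zc, yc = coef[chunk], y[chunk]
    w = (yc @ Zc) / len(chunk)            # simple label-correlation direction,
                                          # standing in for a weak linear learner
    acc = np.mean(np.sign(coef[valid] @ w) == y[valid])
    if acc < 0.5:                         # Step 4: "model reversal" -- flip a
        w, acc = -w, 1.0 - acc            # worse-than-chance classifier
    directions.append(w)
    weights.append(acc)

# Step 5: accuracy-weighted model average of the (possibly reversed) learners.
w_avg = np.average(directions, axis=0, weights=weights)
acc = np.mean(np.sign(coef[valid] @ w_avg) == y[valid])
print(f"model-averaged classifier accuracy: {acc:.3f}")
```

Two design points the sketch is meant to surface: working in the K-dimensional coefficient space keeps the noise added for LDP proportional to K rather than to the grid size, and reserving the larger share of samples for validation makes the accuracy estimates that drive reversal and averaging more reliable than the small training splits themselves.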