Abstract: Effectively training differentially private models requires tuning hyper-parameters while preserving privacy and maintaining accuracy. This work addresses that challenge by analyzing hyper-parameter tuning results and employing the Pareto frontier approach to identify optimal privacy-utility trade-offs and architectures for private learning. The findings deepen the understanding of privacy considerations in model training and inform both the development of effective training methodologies and decision-making in practical applications.
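As a rough illustration of the Pareto frontier idea mentioned above, the sketch below selects the non-dominated configurations from a set of hyper-parameter tuning runs, where each run records a privacy budget (epsilon, lower is better) and a validation accuracy (higher is better). The data format, field names, and example values are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch (assumed data format): keep only Pareto-optimal trade-offs
# from hyper-parameter tuning runs.
from dataclasses import dataclass
from typing import List


@dataclass
class Run:
    params: dict      # hyper-parameter setting (hypothetical structure)
    epsilon: float    # privacy cost of the run (lower is better)
    accuracy: float   # utility of the trained model (higher is better)


def pareto_frontier(runs: List[Run]) -> List[Run]:
    """Return runs not dominated by any other run (lower epsilon AND higher accuracy)."""
    frontier = []
    # Sort by privacy cost (ties broken by higher accuracy first); a run is on
    # the frontier iff its accuracy exceeds every cheaper run seen so far.
    best_acc = float("-inf")
    for run in sorted(runs, key=lambda r: (r.epsilon, -r.accuracy)):
        if run.accuracy > best_acc:
            frontier.append(run)
            best_acc = run.accuracy
    return frontier


# Example usage with made-up tuning results.
runs = [
    Run({"lr": 0.1, "clip": 1.0}, epsilon=2.0, accuracy=0.81),
    Run({"lr": 0.05, "clip": 0.5}, epsilon=1.0, accuracy=0.78),
    Run({"lr": 0.1, "clip": 0.5}, epsilon=3.0, accuracy=0.80),  # dominated
]
for r in pareto_frontier(runs):
    print(r.epsilon, r.accuracy, r.params)
```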