Abstract: The emergence of new machine learning methods has led to their widespread application across various domains, significantly advancing the field of artificial intelligence. However, training and inference of machine learning models rely on vast amounts of data, which often include sensitive private information. Consequently, machine learning faces significant privacy and security challenges. Several studies have demonstrated the vulnerability of machine learning to privacy inference attacks, but they often focus on specific scenarios, leaving a gap in understanding the broader picture. We provide a comprehensive review of privacy attacks in machine learning, focusing on two scenarios: centralized learning and federated learning. This article begins by presenting the architectures of both centralized learning and federated learning, along with their respective application scenarios. It then surveys and categorizes the related inference attacks, providing a detailed analysis of the different stages involved in these attacks. Moreover, the article thoroughly describes and compares the existing defense methods. Finally, the article concludes by highlighting open questions and potential future research directions, aiming to contribute to the ongoing contest between privacy attackers and defenders.