Abstract: Delegation learning is a prevalent approach in privacy-preserving machine learning (PPML), especially when dealing with big data: data owners delegate their data to computationally capable servers for training and inference, and the servers provide these services on a pay-per-use basis. The essence of delegation learning lies in preserving the integrity of the server's training while ensuring the privacy of the delegator's data. However, existing delegation learning schemes struggle to balance security and efficiency, and they cannot guarantee correctness. To tackle these challenges, we propose a cloud-edge collaborative delegation learning framework (CEC-DL) against covert adversaries, which is verifiable and achieves guaranteed output delivery (GOD). To the best of our knowledge, this is the first use of the covert security model in a PPML scenario. Furthermore, we design a probabilistic verifiable secure addition and subtraction computation protocol (PVS-AaS) and a probabilistic verifiable secure multiplication computation protocol (PVS-MUL), which realize secure addition and multiplication in delegation learning without expensive message authentication code (MAC) verification. We also develop a malicious adversary detection protocol (MADP) that deters the malicious actions of potential covert adversaries while ensuring correct output. Finally, we apply CEC-DL to the linear regression model to construct a privacy-preserving linear regression protocol (PP-LRP). Theoretical analysis and experiments show that CEC-DL improves security and is more efficient than verifiable computation schemes designed for malicious adversaries.