Abstract: Federated learning (FL) allows multiple participants to collaboratively train a machine learning model while keeping their data local, and it has seen extensive application in the Internet of Things (IoT). Compared with traditional centralized training, FL does protect the raw data, but it remains vulnerable to inference attacks and other data reconstruction methods. To address this issue, existing research has introduced a variety of cryptographic techniques, chiefly secure multi-party computation (SMC), homomorphic encryption (HE), and differential privacy (DP). However, approaches relying on HE or SMC do not sufficiently protect the model data itself and often incur significant communication and computation overhead, while using DP alone requires injecting substantial noise, which degrades model performance. In this paper, we propose an efficient and privacy-preserving dual-key black-box aggregation method based on threshold Paillier homomorphic encryption (TPHE), which protects the model parameters during the transmission and aggregation phases via a two-step decryption process. To defend against various data reconstruction attacks, we additionally enforce node-level DP, effectively eliminating the possibility of recovering raw data from the aggregated parameters. Experiments on MNIST, CIFAR-10, and SVHN show that our method's accuracy loss is up to 11% smaller than that of other schemes. Furthermore, compared with SMC-based FL schemes, our scheme reduces communication overhead by 60% to 80%, depending on the number of participating nodes. We also conduct comparative experiments on defending against GAN attacks and membership inference attacks, demonstrating that our method effectively protects data privacy.
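The aggregation described above rests on the additive homomorphism of the Paillier cryptosystem: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so the server can aggregate encrypted model updates without seeing any individual one. The paper's actual scheme is a dual-key threshold variant with two-step decryption; as a minimal, hedged illustration of the homomorphic property alone, the following toy single-key Paillier sketch (insecure demo primes, hypothetical helper names `encrypt`/`decrypt`) shows encrypted updates being summed:

```python
import random
from math import gcd

# Toy Paillier cryptosystem -- illustrative only, NOT secure.
# Real deployments use moduli of 2048 bits or more.
p, q = 10007, 10009
n = p * q
n2 = n * n
g = n + 1                                        # standard choice g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)      # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    # c = g^m * r^n mod n^2, with random r coprime to n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each node encrypts its (quantized) model update; the server multiplies
# the ciphertexts, which corresponds to adding the plaintext updates.
updates = [12, 7, 23]
agg_cipher = 1
for u in updates:
    agg_cipher = (agg_cipher * encrypt(u)) % n2

assert decrypt(agg_cipher) == sum(updates)       # homomorphic sum recovered
```

In the threshold setting of the paper, no single party holds the full decryption key; decryption instead requires the cooperation of key-share holders, which is what prevents the aggregator from reading individual updates.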