Federated Smoothing Proximal Gradient for Quantile Regression With Non-Convex Penalties

Published: 01 Jan 2025 · Last Modified: 05 Aug 2025 · IEEE Trans. Signal Inf. Process. over Networks 2025 · CC BY-SA 4.0
Abstract: The rise of internet-of-things (IoT) systems has led to the generation of vast, high-dimensional data across distributed edge devices, often requiring sparse modeling techniques to keep model complexity manageable. In these environments, quantile regression offers a robust alternative to mean-based models by capturing conditional distributional behavior, which is particularly useful under heavy-tailed noise or heterogeneous data. However, penalized quantile regression in federated learning (FL) remains challenging because the quantile loss is non-smooth and the penalties used for sparsity, such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty, are non-convex and non-smooth. To address this gap, we propose the Federated Smoothing Proximal Gradient (FSPG) algorithm, which integrates a smoothing technique into the proximal gradient framework to enable effective, stable, and theoretically guaranteed optimization in decentralized settings. FSPG guarantees a monotonic decrease of the objective function and converges faster than existing methods. We further extend FSPG to handle partial client participation (PCP-FSPG), making the algorithm robust to intermittent node availability by adaptively updating local parameters based on client activity. Extensive experiments show that FSPG and PCP-FSPG achieve superior accuracy, convergence behavior, and variable selection compared with existing baselines, demonstrating their practical utility in real-world federated applications.
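To make the ingredients mentioned in the abstract concrete, the sketch below combines a Nesterov-type smoothed check loss, the textbook MCP proximal operator, and a full-participation server round that averages client gradients before the proximal step. This is a minimal illustration under those assumptions, not the paper's FSPG or PCP-FSPG update; the smoothing choice, step sizes, helper names, and the toy data split are all hypothetical.

```python
import numpy as np

def smoothed_check_grad(residual, tau, h):
    """Gradient of a Nesterov-smoothed check loss w.r.t. the residual u = y - Xw.

    Uses rho_h(u) = max_{theta in [tau-1, tau]} (theta*u - (h/2)*theta^2),
    whose gradient is clip(u/h, tau-1, tau). (Illustrative smoothing choice.)
    """
    return np.clip(residual / h, tau - 1.0, tau)

def mcp_prox(z, lam, gamma, eta):
    """Proximal operator of the MCP penalty with step size eta (requires gamma > eta)."""
    out = np.copy(z)
    absz = np.abs(z)
    small = absz <= eta * lam
    mid = (absz > eta * lam) & (absz <= gamma * lam)
    out[small] = 0.0
    out[mid] = np.sign(z[mid]) * (absz[mid] - eta * lam) / (1.0 - eta / gamma)
    # entries with |z| > gamma*lam are left unshrunk (flat region of MCP)
    return out

def local_gradient(X, y, w, tau, h):
    """Per-client gradient of the smoothed quantile loss (1/n) * sum rho_h(y_i - x_i'w)."""
    theta = smoothed_check_grad(y - X @ w, tau, h)
    return -(X.T @ theta) / X.shape[0]

def federated_prox_gradient_round(clients, w, tau, h, lam, gamma, eta):
    """One server round: average client gradients, then apply the MCP proximal step."""
    grads = [local_gradient(X, y, w, tau, h) for X, y in clients]
    avg_grad = np.mean(grads, axis=0)
    return mcp_prox(w - eta * avg_grad, lam, gamma, eta)

# Toy usage with synthetic heavy-tailed data split across three hypothetical clients.
rng = np.random.default_rng(0)
p, tau = 20, 0.5
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]
clients = []
for _ in range(3):
    X = rng.standard_normal((200, p))
    y = X @ w_true + rng.standard_t(df=3, size=200)  # heavy-tailed noise
    clients.append((X, y))

w = np.zeros(p)
for _ in range(200):
    w = federated_prox_gradient_round(clients, w, tau, h=0.1, lam=0.1, gamma=3.0, eta=0.1)
print("nonzero estimated coefficients:", np.flatnonzero(np.abs(w) > 1e-6))
```

Swapping `mcp_prox` for a SCAD proximal operator, or averaging only the gradients of clients that respond in a given round, would mimic the partial-participation setting in spirit, though the paper's adaptive local-update rule is not reproduced here.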