Abstract: We propose the multi-level network Lasso, which aims to overcome key limitations of existing personalized learning methods, such as ignoring sample homogeneity or heterogeneity and over-parametrization. The multi-level network Lasso learns both a sample-common model and sample-specific models that are succinct and interpretable, in the sense that model parameters are shared across neighboring samples based on only a subset of relevant features. To apply personalized learning in multi-task scenarios, we further extend the multi-level network Lasso to multi-task personalized learning by learning underlying task groups in the feature subspace. Additionally, we investigate a family of multi-level network Lasso penalties based on the $\ell_p$ quasi-norm ($0<p<1$), which helps prevent over-penalization of large group outliers. An alternating algorithm is developed to solve the proposed optimization problem efficiently. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of the proposed method.
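For orientation, a minimal sketch of the kind of objective the abstract describes, assuming the standard network Lasso formulation (Hallac et al., 2015) combined with the sample-common/sample-specific split named above; the paper's exact penalty and grouping structure may differ:
$$\min_{w_c,\,\{v_i\}}\; \sum_{i=1}^{n} \ell\!\left(y_i,\, x_i^\top (w_c + v_i)\right) \;+\; \lambda \sum_{(i,j)\in E} \|v_i - v_j\|_2^{\,p}, \qquad 0 < p \le 1,$$
where $E$ is a graph over samples, $w_c$ is the shared model, and $v_i$ is the personalization for sample $i$. Setting $p=1$ recovers the convex network Lasso edge penalty, while $0<p<1$ gives the quasi-norm family mentioned in the abstract, which penalizes large differences between neighboring samples less aggressively.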