Abstract: Learning a graph that encodes pairwise (dis)similarities in data is an important precursor to many standard graph signal processing (GSP) modules, such as compression and restoration. In data-starved scenarios, to reduce parameterization, previous graph learning algorithms make assumptions in the nodal domain on i) graph connectivity (e.g., edge sparsity), and/or ii) edge weights (e.g., positive edges only). In this paper, given an empirical covariance matrix $\bar{\mathbf{C}}$ estimated from sparse data, we instead consider a spectral-domain assumption on the graph Laplacian matrix $\mathcal{L}$: the first $K$ eigenvectors (called "core" eigenvectors) $\{\mathbf{u}_k\}$ of $\mathcal{L}$ are pre-selected—e.g., based on domain-specific knowledge—and only the remaining eigenvectors are learned and parameterized. We first prove that, inside a Hilbert space of real symmetric matrices, the subspace $\mathcal{H}_{\mathbf{u}}^{+}$ of positive semi-definite (PSD) matrices sharing a common set of $K$ core eigenvectors $\{\mathbf{u}_k\}$ is a convex cone. Inspired by the Gram-Schmidt procedure, we then construct an efficient operator to project a given positive definite (PD) matrix onto $\mathcal{H}_{\mathbf{u}}^{+}$. Finally, we design a hybrid graphical lasso/projection algorithm to compute a locally optimal inverse Laplacian $\mathcal{L}^{-1}\in\mathcal{H}_{\mathbf{u}}^{+}$ given $\bar{\mathbf{C}}$. We apply our graph learning algorithm in two practical settings: parliamentary voting interpolation and predictive transform coding in image compression. Experiments show that our algorithm outperformed existing graph learning schemes in data-starved scenarios, on synthetic data as well as in both practical settings.
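The paper's projection operator is built via a Gram-Schmidt-like construction; as an illustrative stand-in, the sketch below computes the closed-form Frobenius-norm projection onto $\mathcal{H}_{\mathbf{u}}^{+}$, exploiting the fact that any matrix in the cone is block-diagonal in an orthonormal basis extending $\{\mathbf{u}_k\}$. The function name `project_to_cone` and the NumPy/SciPy implementation choices are our assumptions, not the paper's.

```python
import numpy as np
from scipy.linalg import null_space

def project_to_cone(P, U):
    """Frobenius-norm projection of a symmetric matrix P onto the convex
    cone H_u^+ of PSD matrices that admit the columns of U as eigenvectors.

    P: (n, n) symmetric matrix to be projected.
    U: (n, K) matrix whose orthonormal columns are the pre-selected
       "core" eigenvectors u_1, ..., u_K.
    """
    # V spans the orthogonal complement of the core eigenvectors.
    V = null_space(U.T)                      # shape (n, n - K)
    # Core eigenvalues lam_k = u_k^T P u_k, clamped at zero to stay PSD.
    lam = np.maximum(np.einsum('ik,ij,jk->k', U, P, U), 0.0)
    # Complement block: project V^T P V onto the PSD cone by clamping
    # its negative eigenvalues to zero.
    B = V.T @ P @ V
    w, W = np.linalg.eigh((B + B.T) / 2)     # symmetrize for stability
    B_psd = (W * np.maximum(w, 0.0)) @ W.T
    # Reassemble the projected matrix in the original coordinates.
    return U @ np.diag(lam) @ U.T + V @ B_psd @ V.T
```

A small sanity check: choosing the (normalized) constant vector as the single core eigenvector, which is always an eigenvector of a graph Laplacian, the projected matrix should admit it as an eigenvector.

```python
n = 8
u1 = np.ones((n, 1)) / np.sqrt(n)           # constant Laplacian eigenvector
A = np.random.randn(n, n)
P = A @ A.T + n * np.eye(n)                 # a random PD matrix
M = project_to_cone(P, u1)
assert np.allclose(M @ u1, (u1.T @ M @ u1) * u1)   # u1 is an eigenvector of M
```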