Keywords: Spectral Kernel Learning, Machine Learning, Deep Spectral Kernel
Abstract: Deep spectral kernels are constructed by hierarchically stacking explicit spectral kernel mappings derived from the Fourier transform of the spectral density function. This family of kernels unifies the expressive power of hierarchical architectures with the ability of the spectral density to reveal essential patterns within data, thereby helping to explain the underlying mechanisms of models. In this paper, we categorize most existing deep spectral kernel models into four classes based on the stationarity of the spectral kernels and the compositional structure of their associated mappings. Building on this taxonomy, we rigorously investigate two questions concerning the general characterization of deep spectral kernels: (1) Does the deep spectral kernel retain the reproducing property during the stacking process? (2) In which class can the reproducing kernel Hilbert space (RKHS) induced by the deep spectral kernel expand with increasing depth? Specifically, the behavior of the RKHS is tied to its associated spectral density function, which means the deep spectral kernel can be implemented by directly resampling from an adaptive spectral density. These insights motivate us to propose the generative spectral kernel framework, which learns the adaptive spectral distribution directly with generative networks. This method, using only a single-layer spectral kernel architecture, can: (1) generate an adaptive spectral density and match deep spectral kernel performance; (2) circumvent the optimization challenges introduced by multi-layer stacking. Experimental results on synthetic data and several real-world time-series datasets consistently validate our findings.
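The explicit spectral kernel mapping mentioned in the abstract rests on Bochner's theorem: a stationary kernel is the Fourier transform of a spectral density, so sampling frequencies from that density yields a random Fourier feature map whose inner products approximate the kernel. Below is a minimal sketch of this idea (not the paper's method); it assumes a Gaussian spectral density, which corresponds to the RBF kernel, and all names (`rff_features`, `omegas`, `biases`) are illustrative.

```python
import numpy as np

def rff_features(X, omegas, biases):
    """Explicit spectral (random Fourier) feature map.

    By Bochner's theorem, a stationary kernel k(x - y) is the Fourier
    transform of a spectral density p(w). Sampling w ~ p(w) gives
    phi(x) = sqrt(2/D) * cos(x @ W.T + b), so phi(x) @ phi(y) ≈ k(x, y).
    """
    D = omegas.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ omegas.T + biases)

rng = np.random.default_rng(0)
d, D = 3, 2000                       # input dim, number of spectral samples
X = rng.standard_normal((5, d))

# Gaussian spectral density  <->  RBF kernel exp(-||x - y||^2 / 2)
omegas = rng.standard_normal((D, d))
biases = rng.uniform(0.0, 2.0 * np.pi, size=D)

Phi = rff_features(X, omegas, biases)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

Replacing the fixed Gaussian sampler with frequencies drawn from a learned (adaptive) density is the kind of substitution the abstract's generative framework describes.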
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 17476