Keywords: distribution regression, kernel methods, Reproducing Kernel Hilbert Spaces (RKHS), kernel mean embeddings, data-dependent kernel, unsupervised learning
TL;DR: The paper introduces a novel unsupervised objective for learning a data-dependent distribution kernel, based on the principle of entropy maximization in the space of measure embeddings.
Abstract: Empirical data can often be considered as samples from a set of probability distributions. Kernel methods have emerged as a natural approach for learning to classify these distributions. Although numerous kernels between distributions have been proposed, applying kernel methods to distribution regression tasks remains challenging, primarily because selecting a suitable kernel is not straightforward. Surprisingly, the question of learning a data-dependent distribution kernel has received little attention. In this paper, we propose a novel objective for the unsupervised learning of a data-dependent distribution kernel, based on the principle of entropy maximization in the space of probability measure embeddings. We examine the theoretical properties of the latent embedding space induced by our objective, demonstrating that its geometric structure is well-suited for solving downstream discriminative tasks. Finally, we evaluate the performance of the learned kernel across different data modalities.
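As background for the abstract, the sketch below illustrates the standard fixed (not data-dependent) distribution kernel built from empirical kernel mean embeddings, which is the construction such approaches typically start from; it is not the paper's learned kernel, and the RBF base kernel, bandwidth `gamma`, and function names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel values k(a_i, b_j) between rows of A and B.
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

def mean_embedding_kernel(X, Y, gamma=1.0):
    # Inner product of the empirical kernel mean embeddings of two sample sets:
    # <mu_X, mu_Y> = (1 / (n * m)) * sum_{i,j} k(x_i, y_j).
    return rbf_kernel(X, Y, gamma).mean()

# Two "bags" of samples, each treated as an empirical distribution.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(0.5, 1.0, size=(120, 2))
print(mean_embedding_kernel(X, Y, gamma=0.5))
```

In this fixed construction the base kernel (and hence the geometry of the embedding space) is chosen by hand; the paper's objective instead learns that kernel from the data in an unsupervised way.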
Primary Area: Other (please use sparingly, only use the keyword field for more details)
Submission Number: 12514