Keywords: Tensor Unsupervised Feature Selection; Multi-Linear Subspace Learning; Multi-Linear Subspace Distance
TL;DR: We introduce Multi-Linear Subspace Learning Feature Selection (MSLFS), the first feature selection method using multi-linear subspace distance to preserve global tensor structure via mode-specific selection.
Abstract: Feature selection in tensor data poses greater challenges than in vector representations, since it must capture correlations spanning multiple modes rather than treating each mode in isolation. Existing tensor-based methods partially address this but often treat the feature space as a whole, selecting features globally without respecting mode-specific dependencies. This not only overlooks cross-mode interactions but also increases computational burden, as all features must be considered at once. Moreover, these methods lack a principled criterion for preserving the global structure of the original tensor. In this work, we introduce Multi-Linear Subspace Learning Feature Selection (MSLFS), a framework that overcomes these limitations by distributing feature selection across modes. Specifically, MSLFS selects a small number of representative slices along each mode, whose intersections yield the most informative features. The core innovation is a multi-linear subspace distance, which provides a principled measure of how well these selected features preserve the global multi-way structure of the data, while significantly reducing redundancy and computational cost. This objective is complemented by two novel regularizations: a joint sparsity constraint that enforces coordinated sparsity across modes to identify compact, non-redundant features, and a higher-order graph constraint that preserves local manifold geometry within the induced subtensor. Taken together, these components ensure that both the overall tensor structure and the local neighborhood relationships are preserved. Comprehensive experiments on image recognition and biomedical benchmarks demonstrate that MSLFS consistently surpasses state-of-the-art feature selection techniques in clustering tasks.
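To make the mode-wise selection idea concrete, below is a minimal, hypothetical NumPy sketch (not the authors' implementation) of selecting a few slice indices along each feature mode and intersecting them to obtain the induced subtensor. The per-slice energy score used here is a placeholder criterion for illustration only; MSLFS itself chooses the indices by optimizing the multi-linear subspace distance together with the joint sparsity and higher-order graph regularizers.

```python
import numpy as np

def select_slices_per_mode(X, k_per_mode):
    """X: (n_samples, d1, d2, ...) tensor; k_per_mode: slices to keep per feature mode.

    Placeholder scoring: rank slices along each feature mode by total energy.
    """
    selected = []
    for mode, k in enumerate(k_per_mode, start=1):
        # Sum squared entries over every axis except this mode -> one score per slice.
        axes = tuple(a for a in range(X.ndim) if a != mode)
        scores = np.sum(X ** 2, axis=axes)
        selected.append(np.sort(np.argsort(scores)[-k:]))
    return selected

def induced_subtensor(X, selected):
    """Intersect the selected slices across modes to form the reduced tensor."""
    idx = [np.arange(X.shape[0])] + list(selected)  # keep all samples
    return X[np.ix_(*idx)]

# Example: 100 samples of 32x32 "images"; keeping 8 rows and 8 columns
# yields 8 x 8 = 64 selected features per sample.
X = np.random.rand(100, 32, 32)
sel = select_slices_per_mode(X, k_per_mode=[8, 8])
X_sub = induced_subtensor(X, sel)
print(X_sub.shape)  # (100, 8, 8)
```

Because only per-mode index sets are learned, the search space grows with the sum of the mode sizes rather than their product, which is the source of the computational savings claimed in the abstract.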
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 17686