Keywords: Semantic Alignment, Latent Space Structuring, Multi-Objective Learning
TL;DR: LARESA is a lightweight loss framework that aligns latent spaces with label semantics, improving accuracy, efficiency, and interpretability across models.
Abstract: Existing approaches for improving the semantic coherence of latent representations in deep classification models often require architectural modifications or additional pre-training stages. We introduce LARESA, a lightweight, multi-objective, regularization-driven training framework that injects semantic priors into the latent space via auxiliary loss terms, using only class-label text and requiring no architectural changes. LARESA leverages relational distances between language embeddings of the class labels or descriptions to foster robust and interpretable latent spaces. Our method jointly optimizes the standard classification objective with semantic-alignment and cluster-oriented regularizers, using a learnable loss-weighting mechanism to encourage feature representations that are both meaningful and well separated. Across our experiments, LARESA delivers substantial accuracy improvements while simultaneously enhancing latent-space disentanglement. Notably, the language embeddings require only a one-time pre-processing step with minimal overhead, even in settings with many classes, so our regularization terms introduce negligible computational cost during training and can be applied seamlessly to existing classification models.
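The abstract describes aligning relational distances in the latent space with distances between precomputed label embeddings, combined with the classification loss via learnable weights. The sketch below illustrates one plausible reading of this idea; the function names, the choice of cosine distance, and the uncertainty-style learnable weighting are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def semantic_alignment_loss(latents, labels, label_emb):
    """Match pairwise latent distances to pairwise label-embedding distances.

    latents:   (B, D) latent features for a batch
    labels:    (B,)   integer class indices
    label_emb: (C, D) frozen language embeddings of the class-label text,
               computed once as a pre-processing step (per the abstract).
    """
    def pairwise_cos_dist(X):
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        return 1.0 - Xn @ Xn.T

    d_latent = pairwise_cos_dist(latents)
    # Semantic "target" distances for this batch, looked up per sample pair.
    d_sem = pairwise_cos_dist(label_emb)[np.ix_(labels, labels)]
    return float(np.mean((d_latent - d_sem) ** 2))

def combined_loss(l_cls, l_align, log_vars):
    """Learnable loss weighting (uncertainty-style, an assumption):
    each term is scaled by exp(-s) with a +s penalty so the weights
    cannot collapse to zero; log_vars would be trained jointly."""
    w = np.exp(-log_vars)
    return w[0] * l_cls + w[1] * l_align + float(log_vars.sum())
```

If the latents of a batch coincide with their class embeddings, the alignment term vanishes, which is the intended fixed point of this regularizer.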
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 11792