Towards Escaping from Class Dependency Modeling for Multi-Dimensional Classification

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose DCOM, a novel MDC approach that decouples dimension interactions. It identifies a latent factor to achieve partial conditional independence among class variables. Experiments show DCOM outperforms state-of-the-art approaches.
Abstract: In multi-dimensional classification (MDC), the semantics of objects are characterized by multiple class variables from different dimensions. Existing MDC approaches focus on designing effective class dependency modeling strategies to enhance classification performance. However, the intercoupling of multiple class variables poses a significant challenge to the precise modeling of class dependencies. In this paper, we make the first attempt towards escaping from class dependency modeling for addressing MDC problems. Accordingly, a novel MDC approach named DCOM is proposed by decoupling the interactions of different dimensions in MDC. Specifically, DCOM endeavors to identify a latent factor that encapsulates the most salient and critical feature information. This factor will facilitate partial conditional independence among class variables conditioned on both the original feature vector and the learned latent embedding. Once the conditional independence is established, classification models can be readily induced by employing simple neural networks on each dimension. Extensive experiments conducted on benchmark data sets demonstrate that DCOM outperforms other state-of-the-art MDC approaches.
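To make the abstract's pipeline concrete, here is a minimal PyTorch-style sketch of the idea of learning a shared latent factor and then training a simple classifier per dimension conditioned on both the original features and that factor. The class name `DCOMSketch`, the layer sizes, and the class counts are illustrative assumptions, not the authors' released implementation (see the code link below).

```python
import torch
import torch.nn as nn

class DCOMSketch(nn.Module):
    """Illustrative sketch (not the released DCOM code): a shared encoder
    extracts a latent factor z from the feature vector x; each class
    dimension then gets its own simple head conditioned on [x, z]."""
    def __init__(self, in_dim, latent_dim, class_counts):
        super().__init__()
        # Hypothetical encoder that produces the latent factor z.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # One lightweight classifier per dimension, each seeing (x, z).
        self.heads = nn.ModuleList(
            nn.Linear(in_dim + latent_dim, k) for k in class_counts
        )

    def forward(self, x):
        z = self.encoder(x)                        # learned latent factor
        xz = torch.cat([x, z], dim=-1)             # condition on both x and z
        return [head(xz) for head in self.heads]   # per-dimension logits

# Usage with made-up sizes: three dimensions with 4, 3, and 5 classes.
model = DCOMSketch(in_dim=20, latent_dim=16, class_counts=[4, 3, 5])
logits = model(torch.randn(8, 20))  # list of three logit tensors
```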
Lay Summary: In real-world scenarios, objects usually need labels across multiple dimensions. For example, a landscape picture can be labeled along a *time* dimension (with possible labels such as "morning", "afternoon", and "night"), a *weather* dimension (with possible labels such as "sunny", "rainy", and "cloudy"), and a *scene* dimension (with possible labels such as "desert", "mountain", and "grass"). These labels can interact in complex ways: the label "sunny" in the *weather* dimension cannot occur with the label "night" in the *time* dimension, while the label "desert" in the *scene* dimension is often accompanied by "sunny". Such intricate dependencies make Multi-Dimensional Classification (MDC), a task requiring predictions across multiple class spaces, particularly challenging. This paper introduces a groundbreaking approach called DCOM, which bypasses the need to directly analyze class dependencies. Instead, DCOM learns a latent factor that captures the most essential and critical feature information, naturally untangling the chaos caused by interrelated dimensions. This allows simple, independent models to handle each dimension efficiently, like using basic neural networks for separate tasks. As a first attempt towards escaping from class dependency modeling in MDC, DCOM achieves state-of-the-art performance while marking a paradigm shift in handling complex class interactions.
Link To Code: https://github.com/tengingit/DCOM-ICML-25
Primary Area: General Machine Learning->Supervised Learning
Keywords: Multi-Dimensional Classification, Class Dependency Modeling
Submission Number: 8953