Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: multi-label learning, consistency, surrogate-free optimisation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Motivated by the inconsistency of surrogate loss functions in multi-label learning, we propose a theoretically Bayes-consistent, Lebesgue measure-based multi-label learner that achieves state-of-the-art performance.
Abstract: Multi-label loss functions are usually either non-convex or discontinuous, making them practically challenging or impossible to optimise directly. Instead, surrogate loss functions can quantify and approximate the quality of a predicted label set. However, their consistency with the desired loss functions has not been proven. This issue is further exacerbated by the conflicting nature of multi-label loss functions. To learn from multiple related, yet potentially conflicting, multi-label loss functions with a single unified model representation, we propose a Consistent Lebesgue Measure-based Multi-label Learner (CLML). We begin by proving that the optimisation of the Lebesgue measure directly corresponds to the optimisation of multiple multi-label losses, i.e., CLML can achieve theoretical consistency under a Bayes risk framework. Empirical evidence supports our theory by demonstrating that: (1) CLML consistently achieves a better rank than state-of-the-art methods across a wide range of loss functions and datasets; (2) the primary factor contributing to this performance improvement is the Lebesgue measure design, as CLML optimises a simpler feedforward model without additional label-graph or semantic embeddings; and (3) an analysis of the results not only underscores CLML's effectiveness but also highlights inconsistencies between the surrogate and the desired loss functions.
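To make the central quantity concrete: the Lebesgue measure the abstract refers to is, in the two-objective case, the area of loss space dominated by a set of loss vectors relative to a reference point, so improving any constituent loss grows the measure. The following is a minimal, illustrative sketch of that quantity and not the authors' implementation; the function name `lebesgue_measure_2d`, the sweep-based computation, and the two-loss example values are assumptions made for illustration.

```python
import numpy as np

def lebesgue_measure_2d(loss_vectors, reference):
    """Lebesgue measure (hypervolume) of the region dominated by a set of
    2-D loss vectors, bounded by a reference point. Minimisation is
    assumed: smaller loss values dominate more of the region.
    NOTE: illustrative sketch only, not the CLML implementation."""
    # Keep only points strictly better than the reference in both losses.
    pts = np.asarray([p for p in loss_vectors if np.all(np.asarray(p) < reference)])
    if len(pts) == 0:
        return 0.0
    # Sweep points in order of the first loss, accumulating the
    # non-overlapping rectangles each non-dominated point contributes.
    pts = pts[np.argsort(pts[:, 0])]
    volume, prev_y = 0.0, reference[1]
    for x, y in pts:
        if y < prev_y:  # point is non-dominated under the sweep
            volume += (reference[0] - x) * (prev_y - y)
            prev_y = y
    return volume

# Hypothetical example: one candidate model evaluated on two conflicting
# multi-label losses (e.g., Hamming loss and one minus ranking precision).
losses = [(0.20, 0.35), (0.30, 0.15)]
print(lebesgue_measure_2d(losses, reference=(1.0, 1.0)))  # 0.66
```

Under this reading, maximising the dominated measure cannot prefer a solution that is worse on every loss, which is the intuition behind the consistency claim; the paper's formal argument is given under its Bayes risk framework.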
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2175