Keywords: paired images, representation learning, supervised contrastive learning
Abstract: Classification with comparative paired inputs, such as pre- and post-disaster satellite images, distinguishes classes of samples using dual feature sets, each of which individually characterizes a sample. Representation learning from the comparative nature of these inputs calls for not only recognizing invariant patterns shared across all inputs but also effectively differentiating the contrastive attributes present between each pair of inputs. Supervised Contrastive Learning (SCL) aims to learn representations that maximally separate different classes while condensing samples within each class, thereby attaining an adversarial equilibrium. However, this equilibrium typically relies on the assumptions of balanced data and large batch sizes for sufficient negative sampling. These issues are exacerbated when SCL is applied to paired satellite images due to the increased computational load, high-resolution data, and severe class imbalance. To address these challenges, we introduce Latent Orthonormal Contrastive Learning (LOCAL), an approach that optimizes class representations in an orthonormal fashion. By mapping each class to a unique, orthogonal plane in the embedding space, LOCAL is efficient with smaller batch sizes, provably effective regardless of class size imbalance, and yields more discriminative information between pairs of inputs via a feature correlation module. Experimental results on paired image data demonstrate the superior performance of LOCAL over SCL, offering a powerful alternative for paired-input analysis.
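To make the core idea concrete, below is a minimal PyTorch sketch of contrastive learning against fixed orthonormal class anchors, i.e., assigning each class its own orthogonal direction in the embedding space. This is an illustrative assumption of how such an objective could look, not the authors' LOCAL implementation; all function names (`make_orthonormal_anchors`, `orthonormal_alignment_loss`) and design choices here are hypothetical.

```python
# Hedged sketch: orthonormal class anchors for contrastive representation
# learning. Illustrates the general idea in the abstract; NOT the authors'
# LOCAL method -- all names and choices below are assumptions.
import torch
import torch.nn.functional as F

def make_orthonormal_anchors(num_classes: int, dim: int) -> torch.Tensor:
    """One fixed, mutually orthogonal unit vector per class (requires dim >= num_classes)."""
    assert dim >= num_classes
    q, _ = torch.linalg.qr(torch.randn(dim, num_classes))  # orthonormal columns
    return q.T  # shape (num_classes, dim); rows are orthonormal

def orthonormal_alignment_loss(z: torch.Tensor, y: torch.Tensor,
                               anchors: torch.Tensor) -> torch.Tensor:
    """Pull each normalized embedding toward its class anchor.

    Because the anchors are orthogonal by construction, inter-class
    separation is built into the targets rather than enforced through
    negative pairs, which is why small batches and imbalanced classes
    are unproblematic in this formulation.
    """
    z = F.normalize(z, dim=-1)
    return (1.0 - (z * anchors[y]).sum(dim=-1)).mean()

# Usage with embeddings from any encoder of paired inputs:
anchors = make_orthonormal_anchors(num_classes=4, dim=128)
z = torch.randn(32, 128)        # batch of embeddings
y = torch.randint(0, 4, (32,))  # class labels
loss = orthonormal_alignment_loss(z, y, anchors)
```

Note the contrast with standard SCL: there, class separation emerges only from in-batch negatives (hence the large-batch and balance assumptions), whereas with fixed orthogonal targets each sample's objective is independent of what else appears in the batch.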
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8200