Temporally Equivariant Contrastive Learning for Disease Progression

20 Sept 2023 (modified: 22 Feb 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Disease progression, contrastive learning, representation learning, equivariance, temporal task, medical imaging
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We develop a time-equivariant contrastive model for degenerative retinal disease progression in longitudinal imaging datasets.
Abstract: Self-supervised contrastive learning methods provide robust representations by ensuring their invariance to different image transformations while simultaneously preventing representational collapse across different training samples. Equivariant contrastive learning, on the other hand, provides representations that are sensitive to specific image transformations while remaining invariant to others. By introducing equivariance to time-induced transformations, such as the anatomical changes in longitudinal medical images of a patient caused by disease progression, the model can effectively capture such changes in the representation space. However, learning temporally meaningful representations is challenging, as each patient's disease progresses at a different pace and manifests as different anatomical changes. In this work, we propose a Time-equivariant Contrastive Learning (TC) method. First, an encoder projects two unlabeled scans from different time points of the same patient into the representation space. Next, a temporal equivariance module is trained to predict the representation of a later visit from the representation of an earlier visit and the time interval between them. Additionally, an invariance loss is applied to a projection of the representation space to encourage robustness to irrelevant image transformations such as translation, rotation, and noise. The representations learned with TC are not only sensitive to the progression of time; the temporal equivariance module can also be used to predict the representation of a given image at a future time point. Our method was evaluated on two longitudinal ophthalmic imaging datasets, outperforming other state-of-the-art equivariant contrastive learning methods. It also showed higher sensitivity to the temporal ordering of each patient's scans than the existing methods.
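
The following is a minimal, illustrative PyTorch-style sketch of how the components described in the abstract could fit together: an image encoder, a temporal equivariance module conditioned on the time interval, and an invariance loss on a projection head. All module names and sizes (e.g. TimeEquivariantModel, training_loss, lambda_inv) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: module names, sizes, and the loss weighting are
# assumptions, not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeEquivariantModel(nn.Module):
    """Encoder + temporal equivariance predictor + invariance projection head."""

    def __init__(self, feat_dim: int = 128, proj_dim: int = 64):
        super().__init__()
        # Image encoder (small CNN standing in for the paper's backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal equivariance module: maps (representation at visit t,
        # time interval dt) to a prediction of the representation at t + dt.
        self.time_predictor = nn.Sequential(
            nn.Linear(feat_dim + 1, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Projection head on which the transformation-invariance loss acts.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim),
        )


def training_loss(model, view_a, view_b, x_late, dt, lambda_inv=1.0):
    """Combine a temporal equivariance term with an augmentation-invariance term.

    view_a, view_b: two augmentations (e.g. translation/rotation/noise) of the
                    earlier-visit scan; x_late: the later-visit scan;
                    dt: normalized time interval between the two visits.
    """
    z_a = model.encoder(view_a)      # earlier visit, augmentation A
    z_b = model.encoder(view_b)      # earlier visit, augmentation B
    z_late = model.encoder(x_late)   # later visit

    # Equivariance: predict the later-visit representation from the earlier
    # representation and the elapsed time, then match the actual one.
    z_pred = model.time_predictor(torch.cat([z_a, dt], dim=1))
    loss_equiv = F.mse_loss(z_pred, z_late.detach())

    # Invariance: projections of the two augmented views of the same scan
    # should agree (negative cosine similarity). A complete method would also
    # need a safeguard against representational collapse (e.g. negatives or
    # stop-gradient asymmetry), omitted here for brevity.
    p_a = F.normalize(model.projector(z_a), dim=1)
    p_b = F.normalize(model.projector(z_b), dim=1)
    loss_inv = -(p_a * p_b).sum(dim=1).mean()

    return loss_equiv + lambda_inv * loss_inv


if __name__ == "__main__":
    model = TimeEquivariantModel()
    view_a = torch.randn(4, 1, 64, 64)   # toy earlier-visit scans, augmentation A
    view_b = torch.randn(4, 1, 64, 64)   # toy earlier-visit scans, augmentation B
    x_late = torch.randn(4, 1, 64, 64)   # toy later-visit scans
    dt = torch.rand(4, 1)                # toy normalized time intervals
    loss = training_loss(model, view_a, view_b, x_late, dt)
    loss.backward()
    print(f"toy combined loss: {loss.item():.4f}")
```

In this sketch, the same time_predictor that provides the equivariance training signal can be reused at inference to extrapolate a representation to a future time point, mirroring the prediction use case described in the abstract.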
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2460