Abstract: Advances in self-supervised learning have drawn attention to techniques for extracting effective visual representations from unlabeled images. Contrastive learning (CL) trains a model to extract features that are consistent across different augmented views of the same image. The recent success of Masked Autoencoders (MAE) highlights the benefit of generative modeling in self-supervised learning: generative approaches encode the input into a compact embedding and train the model to recover the original input. However, in our experiments we found that vanilla MAE mainly recovers coarse high-level semantic information and is inadequate at recovering detailed low-level information. We show that directly applying MAE is therefore not ideal for dense downstream prediction tasks such as multi-organ segmentation. Here, we propose RepRec, a hybrid visual representation learning framework for self-supervised pre-training on large-scale unlabeled medical datasets that takes advantage of both contrastive and generative modeling. To resolve the aforementioned dilemma of MAE, a convolutional encoder is pre-trained contrastively to provide low-level feature information, and a transformer encoder is pre-trained generatively to capture high-level semantic dependencies by recovering masked representations produced by the convolutional encoder. Extensive experiments on three multi-organ segmentation datasets demonstrate that our method outperforms current state-of-the-art methods.
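The abstract describes a two-stage scheme: a convolutional encoder trained with a contrastive objective, and a transformer trained to recover masked conv-encoder representations. The sketch below is a minimal illustration of that idea under our own assumptions (module names, loss choices, and hyperparameters are hypothetical and not taken from the paper; RepRec's actual architecture and objectives may differ).

```python
# Hypothetical sketch of the two-stage pre-training idea (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Convolutional encoder producing a grid of low-level patch features."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1),
        )

    def forward(self, x):                       # x: (B, 1, H, W)
        feat = self.net(x)                      # (B, dim, H/8, W/8)
        return feat.flatten(2).transpose(1, 2)  # (B, N, dim) patch tokens

def info_nce(z1, z2, tau=0.1):
    """Stage 1: contrastive loss between pooled features of two augmented views."""
    z1 = F.normalize(z1.mean(dim=1), dim=-1)
    z2 = F.normalize(z2.mean(dim=1), dim=-1)
    logits = z1 @ z2.t() / tau                  # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

class MaskedRepRecovery(nn.Module):
    """Stage 2: transformer trained to recover masked conv-encoder representations."""
    def __init__(self, dim=256, depth=4, mask_ratio=0.6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.mask_ratio = mask_ratio

    def forward(self, tokens):                  # tokens: (B, N, dim) from ConvEncoder
        B, N, D = tokens.shape
        mask = torch.rand(B, N, device=tokens.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1),
                                self.mask_token.expand(B, N, D), tokens)
        pred = self.transformer(corrupted)
        # Regress only the masked positions onto the (frozen) conv features.
        return F.mse_loss(pred[mask], tokens.detach()[mask])
```

In this reading, stage 1 pre-trains `ConvEncoder` with `info_nce` on two augmented views of each image; stage 2 freezes it and trains `MaskedRepRecovery` so the transformer learns high-level dependencies while the reconstruction targets retain the contrastively learned low-level detail.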