Highlights
• Parameter-efficient prompt tuning of a frozen model is demonstrated for segmentation.
• A deeply promptable UNETR (PUNETR) with token-dependent predictions is presented.
• A robust contrastive pre-training scheme for dense self-supervision is introduced.
• Concurrent self- and semi-supervised pre-training improves downstream performance.