Facing Annotation Redundancy: OCT Layer Segmentation with only 10 Annotated Pixels per Layer

Yanyu Xu, Xinxing Xu, Huazhu Fu, Meng Wang, Rick Siow Mong Goh, Yong Liu

2022 (modified: 18 Nov 2022), REMIA@MICCAI 2022
Abstract: Retinal layer segmentation from OCT images is a fundamental task in the diagnosis and monitoring of eye-related diseases. The quest for improved accuracy is driving the use of increasingly large datasets with full pixel-level layer annotations. However, manual annotation is expensive and tedious, and annotators also need sufficient medical knowledge, which places a heavy burden on doctors. We observe that flattened OCT images contain a large number of repetitive texture patterns. More surprisingly, reducing the annotation from 100% to 10%, and even to 1%, only slightly degrades a segmentation model's performance: the error rises from 2.53 µm to 2.76 µm, and to 3.27 µm on a validation set, respectively. This observation motivates us to investigate the redundancy of annotations in the feature space, which could greatly ease the annotation of medical images. To sharply reduce annotation costs, we propose a new annotation-efficient learning paradigm that annotates only a fixed, limited number of pixels for each layer in each image. Exploiting the redundancy of the repetitive patterns within each layer of OCT images, we employ a VQ memory bank that stores features extracted across the whole dataset to augment the visual representation. Experimental results on two public datasets validate the effectiveness of our model: with only 10 annotated pixels per layer in an image, our performance is very close to that of previous methods trained on the fully annotated dataset.
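The abstract does not specify the architecture of the VQ memory bank, so the following is only a minimal PyTorch sketch of the general idea: a learned codebook of prototype features, queried by nearest-neighbor lookup and used to augment per-pixel encoder features. All names, sizes, and the straight-through gradient trick are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class VQMemoryBank(nn.Module):
    """Hypothetical sketch: a codebook of K prototype vectors; each input
    feature is augmented with its nearest codebook entry. A straight-through
    estimator keeps the encoder trainable end to end."""

    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, dim) per-pixel or per-patch features from the encoder
        codes = self.codebook.weight                 # (K, dim) stored memories
        dists = torch.cdist(feats, codes)            # (N, K) L2 distances
        nearest = dists.argmin(dim=1)                # index of closest prototype
        quantized = self.codebook(nearest)           # (N, dim) retrieved memories
        # Straight-through: gradients bypass the non-differentiable argmin
        quantized = feats + (quantized - feats).detach()
        # Concatenate original and retrieved features as the augmented representation
        return torch.cat([feats, quantized], dim=1)  # (N, 2 * dim)

# Usage sketch: augment encoder features before a segmentation head
bank = VQMemoryBank(num_codes=512, dim=64)
x = torch.randn(1024, 64)   # e.g. flattened per-pixel features of one B-scan
aug = bank(x)               # (1024, 128)
```

The intuition behind such a bank matches the paper's observation: because OCT layers repeat the same texture patterns, features retrieved from a dataset-wide codebook can stand in for the supervision that sparse (10-pixels-per-layer) annotation no longer provides.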