Abstract: Image segmentation is an important part of many biomedical research and clinical pipelines. Because images within a dataset are often similar in appearance and composition, structures in one image can contain information that is useful for segmenting other images. However, existing image segmentation models segment each input image independently, limiting their ability to share this information.
We present InterConv, a mechanism that enables segmentation models to interact and share information across a set of structurally related images. InterConv is a layer that can be inserted into any network to enable set-level interaction among intermediate sample features without changing the fundamental network architecture, and it can therefore be integrated into most existing segmentation models. We demonstrate the effectiveness of InterConv by applying it to two state-of-the-art segmentation architectures, UNets and Vision Transformers, on challenging tasks in both automatic and interactive biomedical image segmentation. By learning to make samples interact through aggregated set features, InterConv consistently improves per-sample segmentation performance, in some cases by up to 19%.
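The abstract does not specify InterConv's internals, but the description (a drop-in layer that lets intermediate per-sample features interact via aggregated set features) suggests a simple pattern. Below is a minimal, hedged sketch of such a layer in PyTorch; the class name's constructor signature, the choice of mean pooling over the set dimension, and the 3×3 convolutional fusion are all illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class InterConv(nn.Module):
    """Illustrative set-interaction layer (a sketch under assumptions, not the
    paper's exact method). Given features for a set of structurally related
    images, shape (batch, set, channels, H, W), it aggregates features across
    the set dimension and fuses the aggregate back into each sample."""

    def __init__(self, channels: int):
        super().__init__()
        # Fuse per-sample features with the set aggregate (2*C -> C), so the
        # layer preserves the feature shape and can be dropped into a network.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, S, C, H, W) -- S structurally related images per set.
        b, s, c, h, w = x.shape
        pooled = x.mean(dim=1, keepdim=True)       # (B, 1, C, H, W) set aggregate
        pooled = pooled.expand(-1, s, -1, -1, -1)  # broadcast back to each sample
        fused = torch.cat([x, pooled], dim=2)      # (B, S, 2C, H, W)
        fused = fused.reshape(b * s, 2 * c, h, w)  # fold set into batch for conv
        out = self.act(self.fuse(fused))
        return out.reshape(b, s, c, h, w)          # same shape as the input


if __name__ == "__main__":
    # Example: the layer is shape-preserving, so it can sit between two
    # existing convolutional blocks of an encoder or decoder.
    layer = InterConv(channels=16)
    feats = torch.randn(2, 4, 16, 32, 32)  # 2 sets of 4 related images
    print(layer(feats).shape)              # torch.Size([2, 4, 16, 32, 32])
```

Because the output shape matches the input, a sketch like this can be interleaved with the existing layers of a UNet or Vision Transformer backbone, which is consistent with the abstract's claim that InterConv integrates without altering the fundamental architecture.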