Abstract: Existing studies tend to pursue higher accuracy through model modifications and model integration, which improves performance but incurs substantial computational cost and therefore longer detection times. In medical imaging, time is extremely sensitive. Moreover, most current semantic segmentation models adopt an encoder-decoder or two-branch structure; their repeated pooling operations for extracting high-level semantic information cause information loss, even when unpooling or similar operations are applied to recover it. In addition, visual attention mechanisms have shown superior performance on a variety of tasks. Motivated by these observations, this paper proposes the non-pooling network (NPNet): removing pooling markedly reduces information loss, and an attention enhancement module (AEM) effectively increases the weight of useful information. The shallow network structure greatly reduces the number of parameters and the computational cost. We evaluate NPNet on three benchmark semantic segmentation datasets against multiple state-of-the-art (SOTA) models, and the experimental results show that NPNet achieves SOTA performance with an excellent balance between accuracy and speed.
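To illustrate the two ideas named in the abstract (replacing pooling-based downsampling with learned strided convolutions, and re-weighting features with an attention module), the following is a minimal sketch. The module names, channel sizes, and the exact attention formulation are assumptions for illustration only and do not reproduce the paper's actual NPNet/AEM design.

```python
# Minimal sketch, assuming strided convolutions replace pooling and a simple
# learned spatial attention map stands in for the attention enhancement idea.
import torch
import torch.nn as nn

class NonPoolingDownsample(nn.Module):
    """Downsample with a strided convolution instead of a pooling layer."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class AttentionEnhancement(nn.Module):
    """Re-weight features with a learned spatial attention map (illustrative)."""
    def __init__(self, ch):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Emphasize informative spatial locations, suppress the rest.
        return x * self.attn(x)

if __name__ == "__main__":
    x = torch.randn(1, 32, 128, 128)
    y = AttentionEnhancement(64)(NonPoolingDownsample(32, 64)(x))
    print(y.shape)  # torch.Size([1, 64, 64, 64])
```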