Highlights
• An intermediate attention mechanism and decision module generate the final decision graph.
• The learning ability for local semantic information is reinforced.
• A balance is achieved between patch self-attention and global self-attention.
• A shallow feature attention module extracts the initial global self-attentive feature map, compensating for the Transformer's weak ability to capture primary (low-level) semantics. A hedged sketch of such a module is given below.
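To make the last highlight concrete, the sketch below illustrates one plausible form of a shallow feature attention module: a small convolutional stem captures primary local semantics, and multi-head self-attention over the flattened feature map produces an initial global self-attentive feature map. This is a minimal illustration under assumed design choices; the class name `ShallowFeatureAttention`, layer sizes, and the conv-stem-plus-attention layout are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a shallow feature attention module (PyTorch).
# Assumed design: conv stem for low-level semantics, then global self-attention
# over flattened spatial positions to form an initial global feature map.
import torch
import torch.nn as nn


class ShallowFeatureAttention(nn.Module):
    def __init__(self, in_channels=3, dim=64, num_heads=4):
        super().__init__()
        # Shallow convolutional stem: extracts primary (low-level) semantics.
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Global self-attention over all spatial positions of the feature map.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        f = self.stem(x)                          # (B, C, H, W) low-level features
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)     # (B, H*W, C) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)     # residual connection + norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)  # initial global map


# Usage example: initial global self-attentive feature map for a 224x224 image.
if __name__ == "__main__":
    module = ShallowFeatureAttention()
    feat = module(torch.randn(1, 3, 224, 224))
    print(feat.shape)  # torch.Size([1, 64, 56, 56])
```

The intent of this arrangement, as described in the highlights, is that the convolutional stem supplies the local semantic cues a pure Transformer tends to miss early on, while the attention step still yields a globally contextualized feature map for the later stages.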