Highlights

• The proposed SHRA module explores the high-order similarity among nodes via hypergraph learning.
• A structure-sharing hypergraph convolution (SHGCN) is performed to reason about the attention coefficients (see the sketch after this list).
• The hypergraphs are combined in a right-shifted-permutation sequence.
• The SHRA module outperforms classic attention mechanisms for CNNs on three datasets.
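The highlights name SHGCN, a structure-sharing hypergraph convolution, but the excerpt gives no implementation details. As a point of reference, the sketch below shows one step of generic HGNN-style hypergraph convolution (the standard propagation rule X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Θ), which such modules typically build on; the function name, shapes, and toy data are illustrative assumptions, not the authors' SHGCN.

```python
# Minimal sketch of a generic hypergraph convolution step (HGNN-style
# propagation). The paper's SHGCN adds structure sharing on top of a
# step like this; those details are not in the excerpt, so everything
# below is an illustrative assumption, not the authors' implementation.
import numpy as np

def hypergraph_conv(X, H, W_e, Theta):
    """One propagation step: X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta.

    X:     (n_nodes, in_dim)   node features
    H:     (n_nodes, n_edges)  incidence matrix (H[v, e] = 1 if node v is in hyperedge e)
    W_e:   (n_edges,)          hyperedge weights
    Theta: (in_dim, out_dim)   learnable projection
    """
    Dv = H @ W_e                       # weighted node degrees
    De = H.sum(axis=0)                 # hyperedge degrees (nodes per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    # Normalized node-edge-node propagation over the hypergraph structure.
    A = Dv_inv_sqrt @ H @ np.diag(W_e) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)  # ReLU nonlinearity

# Toy example: 4 nodes, 2 hyperedges.
rng = np.random.default_rng(0)
H = np.array([[1, 0], [1, 1], [0, 1], [1, 1]], dtype=float)
X = rng.normal(size=(4, 8))
out = hypergraph_conv(X, H, W_e=np.ones(2), Theta=rng.normal(size=(8, 4)))
print(out.shape)  # (4, 4)
```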