Highlights
• We propose a novel graph neural network method that integrates a modular prior via attention architectures to model interpretable FBGs.
• We design a modular attention mechanism that integrates SVD to capture modular priors (a hedged sketch follows this list).
• Experimental results on two benchmark databases show that the proposed method generally achieves higher classification accuracy.
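The highlights do not spell out the mechanism, but one way to read "a modular attention mechanism that integrates SVD" is as attention logits biased by a low-rank modular prior recovered from a truncated SVD of a connectivity matrix. The sketch below is only an illustration of that reading, not the paper's actual method: the function name `modular_attention`, the `rank` parameter, and the additive combination of the prior with the attention scores are all assumptions.

```python
import numpy as np

def modular_attention(X, A, rank=4):
    """Hypothetical sketch: attention over node features X (n x d), biased by a
    low-rank modular prior derived from a truncated SVD of the connectivity
    matrix A (n x n). The exact combination rule is an assumption."""
    n, d = X.shape
    # Truncated SVD of the connectivity matrix: the top-k left singular
    # vectors act as soft module (community) indicators.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k = U[:, :rank]                      # n x k soft module assignments
    prior = U_k @ U_k.T                    # n x n prior: large for same-module pairs
    # Standard scaled dot-product attention scores on node features.
    scores = (X @ X.T) / np.sqrt(d)
    # Bias the attention logits with the modular prior before the softmax.
    logits = scores + prior
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X                     # attended node representations

# Toy usage: 6 nodes, 8-dim features, random symmetric connectivity.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8))
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2
out = modular_attention(X, A, rank=2)
print(out.shape)  # (6, 8)
```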