Abstract: In this study, we propose a cascade structure of dynamic graph convolutional and capsule networks for accurate decoding of motor imagery (MI) based brain-computer interfaces (BCIs) using both electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals. The same network structure, with different parameter settings, is applied to the two modalities to extract features through a temporal convolution block, a dynamic graph convolution block, and a capsule generation block. The temporal convolution block learns temporal features, the dynamic graph convolution block learns spatial features, and the capsule generation block generates primary capsules. The resulting capsule features then undergo cross-attention and pass through a feature fusion block and a dynamic routing block, an iterative algorithm that learns the connection weights between primary capsules and digit capsules. The mean accuracy of leave-one-session-out testing reaches 92.60% ± 4.49% and 92.20% ± 2.95% on self-collected EEG-fNIRS data (dataset A) and a publicly available dataset (dataset B), respectively, whereas the accuracy of randomized five-fold cross-validation on another publicly available dataset (dataset C) is 85.30% ± 3.58%. Moreover, leave-one-subject-out testing shows that the proposed method outperforms current state-of-the-art methods, such as hybrid EEGNet, hybrid LSTM, and hybrid CapsNet, by at least 4% across all three datasets. These results demonstrate that the proposed network structure is a strong candidate for decoding MI-based BCIs with multiple modalities.
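To make the dynamic routing block described above concrete, the sketch below shows standard routing-by-agreement between primary and digit capsules in PyTorch. It is only a minimal illustration of the general technique, not the authors' implementation: the module name, tensor shapes, capsule dimensions, and number of routing iterations are all assumptions.

```python
# Minimal sketch of routing-by-agreement between primary and digit capsules.
# All names, shapes, and the iteration count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Squashing non-linearity: keeps direction, maps length into [0, 1)."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)


class DynamicRouting(nn.Module):
    def __init__(self, n_primary, d_primary, n_digit, d_digit, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        # One transformation matrix per (primary capsule, digit capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(n_primary, n_digit, d_primary, d_digit))

    def forward(self, u):                                  # u: (B, n_primary, d_primary)
        # Prediction vectors u_hat_{j|i} = W_ij u_i  ->  (B, n_primary, n_digit, d_digit)
        u_hat = torch.einsum('bip,ijpq->bijq', u, self.W)
        # Routing logits b_ij start at zero.
        b = torch.zeros(u.size(0), self.W.size(0), self.W.size(1), device=u.device)
        for _ in range(self.n_iters):
            c = F.softmax(b, dim=-1)                       # coupling weights over digit capsules
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)       # weighted sum -> (B, n_digit, d_digit)
            v = squash(s)                                  # digit capsule outputs
            # Agreement between predictions and outputs updates the routing logits.
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v                                           # capsule length ~ class evidence


# Hypothetical usage: 32 fused primary capsules of dim 8 routed to 2 MI-class digit capsules.
if __name__ == "__main__":
    routing = DynamicRouting(n_primary=32, d_primary=8, n_digit=2, d_digit=16)
    fused = torch.randn(4, 32, 8)          # stand-in for fused EEG-fNIRS primary capsules
    digit_caps = routing(fused)            # (4, 2, 16)
    print(digit_caps.norm(dim=-1))         # per-class capsule lengths
```

In this reading, the iteratively updated coupling coefficients play the role of the learned connection weights between primary and digit capsules mentioned in the abstract, with the capsule lengths serving as class scores.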