MamlFormer: Priori-experience guiding transformer network via manifold adversarial multi-modal learning for laryngeal histopathological grading
Highlights:
• We propose MamlFormer for effective multimodal fusion of the LSCC high- and low-magnification image modalities.
• The manifold block eliminates redundant feature information introduced by background semantics, thereby improving the consistency of the LSCC high- and low-magnification image modalities within the multimodal model.
• The adversarial block adaptively learns a latent metric over the distributions of the LSCC high- and low-magnification image modalities, thereby enhancing the complementarity of the two modalities.
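For orientation, the sketch below (not the authors' implementation; the module names, feature dimensions, and the use of a gradient-reversal layer are all assumptions) illustrates the general pattern the highlights describe: encoding high- and low-magnification features into a shared space, adversarially aligning their distributions, and fusing them with a transformer layer.

```python
# Hedged sketch of adversarial multimodal fusion; hypothetical, not MamlFormer itself.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None


class AdversarialFusion(nn.Module):
    """Toy two-modality fusion: shared-space encoders + modality discriminator."""

    def __init__(self, in_dim=512, dim=256, n_heads=4):
        super().__init__()
        self.enc_high = nn.Linear(in_dim, dim)  # high-magnification features
        self.enc_low = nn.Linear(in_dim, dim)   # low-magnification features
        self.fusion = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True
        )
        # The discriminator tries to tell which modality a token came from;
        # the reversed gradient pushes both encoders toward a shared space.
        self.disc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x_high, x_low, lamb=1.0):
        h = self.enc_high(x_high)                       # (B, T, dim)
        l = self.enc_low(x_low)                         # (B, T, dim)
        tokens = torch.cat([h, l], dim=1)               # (B, 2T, dim)
        fused = self.fusion(tokens)                     # fused representation
        logits = self.disc(GradReverse.apply(tokens, lamb))  # modality logits
        return fused, logits


# Usage: a task loss is applied to `fused` and a cross-entropy modality loss
# to `logits`; the gradient reversal turns the latter into an alignment signal.
x_high = torch.randn(2, 16, 512)
x_low = torch.randn(2, 16, 512)
fused, logits = AdversarialFusion()(x_high, x_low)
```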