Multiple prior representation learning for self-supervised monocular depth estimation via hybrid transformer
Highlights
• Exploring the complementary cues of multiple priors.
• A hybrid transformer and a lightweight pose network are employed to capture spatial priors.
• Leveraging context priors to perceive complex structures.
• Semantic priors are introduced to enhance the object representation.
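For context, the sketch below illustrates the generic self-supervised training signal that depth-and-pose methods of this kind share: a depth map and a relative camera pose (from a lightweight pose network) are supervised only by a photometric reprojection loss between adjacent video frames. This is a minimal illustration under stated assumptions, not the paper's hybrid-transformer or prior-learning modules; the function names, tensor shapes, and the L1-only photometric term are simplifying placeholders.

```python
# Minimal sketch of the photometric reprojection objective used in
# self-supervised monocular depth estimation. Assumed shapes:
#   depth (B,1,H,W), K / inv_K (B,3,3), T (B,4,4), target / source (B,3,H,W).
# All names are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F

def backproject(depth, inv_K):
    """Lift target-frame pixels to 3D camera points using the predicted depth."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)
    rays = inv_K @ pix                          # (B, 3, H*W) camera rays
    return depth.reshape(b, 1, -1) * rays       # scale rays by depth

def project(points, K, T):
    """Rigidly transform 3D points into the source frame and project to pixels."""
    b, _, n = points.shape
    ones = torch.ones(b, 1, n, dtype=points.dtype, device=points.device)
    cam = T @ torch.cat([points, ones], dim=1)  # (B, 4, H*W) homogeneous coords
    pix = K @ cam[:, :3]                        # perspective projection
    return pix[:, :2] / (pix[:, 2:3] + 1e-7)    # (B, 2, H*W) pixel coordinates

def photometric_reprojection_loss(target, source, depth, K, inv_K, T):
    """Warp the source frame into the target view and compare photometrically.
    L1 only, for brevity; practical systems usually combine SSIM and L1."""
    b, _, h, w = target.shape
    pix = project(backproject(depth, inv_K), K, T)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid_x = pix[:, 0].reshape(b, h, w) / (w - 1) * 2 - 1
    grid_y = pix[:, 1].reshape(b, h, w) / (h - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)          # (B, H, W, 2)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (target - warped).abs().mean()
```

In this setup the target/source pair comes from adjacent video frames, and the relative pose T is predicted by the pose network rather than given, so the whole pipeline is trained without ground-truth depth.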