Highlights
• Proposes a point cloud one-stage BERT-style pre-training method.
• Uses a momentum tokenizer to provide continuous and dynamic supervision signals.
• Requires no extra tokenizer training step.
• Uses contrastive learning to learn better high-level semantic representations.
• Achieves the best performance on multiple downstream tasks.
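Since the highlights mention a momentum tokenizer, the sketch below illustrates how such a tokenizer is commonly maintained: as an exponential moving average (EMA) of the online encoder, so the supervision tokens it produces evolve continuously during pre-training rather than being fixed by a separately trained tokenizer. This is a minimal illustration under that assumption; the function name `momentum_update`, the momentum value `m = 0.999`, and the stand-in modules are hypothetical and not the paper's actual implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of a momentum
# tokenizer: its weights track an exponential moving average of the
# online encoder, giving smoothly changing supervision signals.
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(tokenizer: nn.Module, encoder: nn.Module, m: float = 0.999) -> None:
    """EMA update: tokenizer <- m * tokenizer + (1 - m) * encoder."""
    for p_t, p_e in zip(tokenizer.parameters(), encoder.parameters()):
        p_t.mul_(m).add_(p_e, alpha=1.0 - m)

# Usage: the tokenizer starts as a frozen copy of the encoder and is
# refreshed after each optimizer step on the encoder.
encoder = nn.Linear(384, 384)        # stand-in for a point cloud Transformer encoder
tokenizer = copy.deepcopy(encoder)   # momentum tokenizer; receives no gradients
for p in tokenizer.parameters():
    p.requires_grad_(False)
momentum_update(tokenizer, encoder, m=0.999)
```

Because the tokenizer is updated only through this EMA rule, no extra tokenizer training stage is needed, which is consistent with the one-stage claim in the highlights.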