Abstract: Self-supervised learning has shown great potential for extracting powerful visual representations without human annotations. Various works have been proposed to tackle self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize both positive and negative samples to guide the training direction; (2) asymmetric network methods (e.g., BYOL, SimSiam) get rid of negative samples by introducing a predictor network and the stop-gradient operation; (3) feature decorrelation methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions. These methods appear quite different in their loss functions, which stem from various motivations. Their reported accuracies also vary, since different works adopt different networks and training tricks. In this work, we demonstrate that these methods can be unified into the same form. Instead of comparing their loss functions, we derive a unified formula through gradient analysis. Furthermore, we conduct fair and detailed experiments to compare their performance. It turns out that there is little gap between these methods, and that the use of a momentum encoder is the key factor for boosting performance.
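To make the three perspectives above concrete, the following is a minimal PyTorch sketch of a representative, simplified loss for each family. It is an illustration rather than a reproduction of any specific method: the actual implementations in MoCo/SimCLR, BYOL/SimSiam, and Barlow Twins/VICReg differ in details such as encoders, projectors, augmentations, and hyper-parameters.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.2):
    """Simplified InfoNCE (MoCo/SimCLR style): each sample's second view is
    its positive; all other samples in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def asymmetric_loss(p1, z2):
    """Simplified BYOL/SimSiam style: the predictor output p1 is pulled
    towards the target z2, which is cut off from backpropagation
    (stop-gradient); no negative samples are needed."""
    p1 = F.normalize(p1, dim=1)
    z2 = F.normalize(z2.detach(), dim=1)       # stop-gradient on the target branch
    return -(p1 * z2).sum(dim=1).mean()        # negative cosine similarity

def decorrelation_loss(z1, z2, lambd=5e-3):
    """Simplified Barlow Twins style: push the cross-correlation matrix of
    the two views towards the identity, reducing redundancy between
    feature dimensions."""
    z1 = (z1 - z1.mean(0)) / z1.std(0)         # standardize along the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.t() @ z2 / z1.size(0)               # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```

Despite their different appearance at the loss level, the gradient analysis in this work shows that these objectives induce closely related update directions.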
From this unified framework, we propose UniGrad, a simple but effective gradient form for self-supervised learning. It does not require a memory bank or a predictor network, yet still achieves state-of-the-art performance and can easily adopt other training strategies. Extensive experiments on linear evaluation and many downstream tasks also demonstrate its effectiveness. Code shall be released.
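The abstract does not spell out the UniGrad formula itself. As one illustrative reading of "a simple gradient form" that needs neither a memory bank nor a predictor, the hypothetical sketch below builds a surrogate loss whose gradient with respect to each normalized feature combines a positive alignment term with a repulsion term based on a running feature correlation matrix. The names `unigrad_style_loss` and `update_correlation`, and the exact form, are assumptions for illustration; refer to the paper for the actual method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_correlation(corr, z, momentum=0.99):
    """Hypothetical helper: running estimate of the feature correlation
    matrix E[z z^T], maintained with momentum instead of a memory bank."""
    z = F.normalize(z, dim=1)
    return momentum * corr + (1 - momentum) * (z.t() @ z) / z.size(0)

def unigrad_style_loss(z1, z2, corr, lambd=1.0):
    """Surrogate loss whose gradient w.r.t. the normalized feature z1 is
    proportional to -z2 + lambd * corr @ z1: an alignment (positive) term
    plus a correlation-based repulsion term; no predictor is involved."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    pos = -(z1 * z2.detach()).sum(dim=1).mean()          # pull the two views together
    neg = ((z1 @ corr.detach()) * z1).sum(dim=1).mean()  # z1^T corr z1 per sample
    return pos + 0.5 * lambd * neg
```

In a training loop, `corr` would be initialized (e.g., to the identity matrix), refreshed with `update_correlation` after every step, and the loss symmetrized over the two augmented views.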