Keywords: Cyclic Learning, Self-supervised Learning
Abstract: Cyclic learning has emerged as a powerful paradigm for weakly supervised learning. It trains pairs of mutually inverse tasks and leverages cycle-consistency in the design of loss functions. However, its potential remains underexplored, as existing methods typically focus on narrow, domain-specific implementations.
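(For concreteness, a standard cycle-consistency loss in the CycleGAN style, with mappings $F: X \to Y$ and $G: Y \to X$ between domains $X$ and $Y$, reads
\[
\mathcal{L}_{\mathrm{cyc}}(F, G) = \mathbb{E}_{x \sim p_X}\!\left[\lVert G(F(x)) - x \rVert_1\right] + \mathbb{E}_{y \sim p_Y}\!\left[\lVert F(G(y)) - y \rVert_1\right];
\]
this is an illustrative form, not the paper's exact formulation, which the abstract does not state.)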
In this work, we develop generalized solutions for both pairwise cycle-consistent tasks and self-cycle-consistent tasks. By modeling cross-domain mappings as conditional probability distributions, we reformulate the cycle-consistency objective as an evidence lower bound (ELBO) optimization problem via variational inference. Building on this formulation, we propose two training strategies applicable to arbitrary cyclic learning tasks: single-step optimization and alternating optimization.
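(To sketch how such a reformulation can look, as a hypothetical instantiation rather than a quote from the paper: model the forward mapping as an approximate posterior $q_\phi(y \mid x)$ and the backward mapping as a conditional likelihood $p_\theta(x \mid y)$ with prior $p(y)$; the log-evidence of reconstructing $x$ is then bounded by the standard ELBO
\[
\log p_\theta(x) \;\geq\; \mathbb{E}_{q_\phi(y \mid x)}\!\left[\log p_\theta(x \mid y)\right] - \mathrm{KL}\!\left(q_\phi(y \mid x) \,\middle\|\, p(y)\right),
\]
where the reconstruction term plays the role of a cycle-consistency loss and the KL term regularizes the forward mapping. Under this reading, single-step optimization would update $\theta$ and $\phi$ jointly on the bound, while alternating optimization would update them in turn, EM-style.)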
Our framework demonstrates broad applicability across diverse tasks. In unpaired image-to-image translation, it offers a theoretical justification for CycleGAN and yields CycleGN, a competitive GAN-free alternative.
In unsupervised tracking, CycleTrack and CycleTrack-EM, both instantiations of our conceptual design, achieve state-of-the-art results on multiple benchmarks.
This work establishes the theoretical foundations of cyclic learning and offers a general paradigm for future research.
The source code for CycleGN and CycleTrack is publicly available.
Primary Area: learning theory
Submission Number: 19614