Abstract: Previous works condition aging patterns on one-hot vectors or artificially predefined distributions. However, different age groups exhibit different intra-class variations, which makes it difficult to discriminatively express differences in apparent age across all age groups. Learning an adaptive aging feature distribution for each target age group from the training data is a promising solution. Unfortunately, existing datasets commonly suffer from varying degrees of semantic-level attribute imbalance, which causes previous approaches to generate paradoxical appearances. To address these issues, we propose a novel framework comprising three modules: the Causal Aging (CA) module, the Shapley Value Quantization (SVQ) module, and the Differentiated Age Embedding Transformation (DAT) module. Specifically, to eliminate the effect of attribute imbalance on learning the adaptive distribution of a target age group, we design the CA module, which controls the effect of momentum on aging features through de-confounded training; the influence of aging-independent attributes that appear abundantly in the training data is removed from the target aging feature by counterfactual-inference subtraction. The SVQ module then quantifies the contribution of different attributes to age based on the output of the CA module, yielding adaptive age distributions for different age groups. Finally, the DAT module takes a target age vector sampled from the age distribution quantized by SVQ and modulates the age representation of the generated image. Extensive experiments on four face aging datasets show that our model achieves convincing performance compared with current state-of-the-art methods.
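The Shapley-value attribution underlying the SVQ module can be illustrated with a minimal sketch. Note that the attribute names and the additive "apparent age shift" value function below are hypothetical toy choices for illustration only, not the paper's actual implementation; only the exact Shapley computation itself is standard.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: each player's average marginal
    contribution over all orders of coalition formation."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi

# Hypothetical value function: predicted apparent-age shift (in years)
# when a subset of facial attributes is present; toy numbers.
contrib = {"wrinkles": 6.0, "gray_hair": 3.0, "glasses": 0.5}

def apparent_age_shift(attrs):
    # Additive toy model; real attribute interactions are non-additive,
    # which is exactly when Shapley values become informative.
    return sum(contrib[a] for a in attrs)

phi = shapley_values(list(contrib), apparent_age_shift)
# For an additive game the Shapley value of each attribute
# recovers its individual contribution.
```

In the additive toy case each attribute's Shapley value equals its standalone contribution; with a learned, non-additive age predictor as the value function, the same computation distributes the predicted age shift fairly across interacting attributes.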