Keywords: Deep network optimization, natural selection, sample weighting, image classification, emotion recognition
Abstract: In conventional deep learning training paradigms, all samples are typically subjected to uniform selective pressure, which fails to account for variations in competitive intensity and diversity among them. This often leads to class-imbalance bias, insufficient learning of hard samples, and improper handling of noisy samples. Drawing inspiration from the principles of species competition and adaptation in natural ecosystems, we propose a bio-inspired optimization method for deep networks, termed Natural Selection (NS). NS introduces a competition mechanism by first assembling a group of samples into a composite image and then downscaling it to the original input size for model inference. Each sample is then assigned a natural selection score based on the model's predictions on this composite image, reflecting its competitive status within the group. This score dynamically adjusts the loss weight of each sample, yielding an adaptive network optimization process driven by competitive interactions among training samples. Experimental results on 12 public datasets consistently demonstrate that NS improves performance without being tied to specific network architectures or task assumptions. This study offers a novel perspective on deep network optimization and provides guidance for broader applications. The code will be made publicly accessible.
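The pipeline sketched in the abstract can be illustrated in a few lines. This is a minimal NumPy sketch, not the authors' implementation: it tiles a group of samples into a composite image, average-pools the composite back to the original input size, and maps hypothetical per-sample scores to loss weights. The grid size, the pooling choice, and the softmax weighting are all assumptions for illustration; the actual scoring rule in the paper is based on the model's predictions on the composite.

```python
import numpy as np

def make_composite(group, grid=2):
    """Tile grid*grid samples of shape (H, W, C) into one composite
    image, then average-pool it back down to the original H x W size
    (a stand-in for the paper's downscaling step)."""
    h, w, c = group[0].shape
    comp = np.zeros((grid * h, grid * w, c))
    for i, img in enumerate(group):
        row, col = divmod(i, grid)
        comp[row * h:(row + 1) * h, col * w:(col + 1) * w] = img
    # Average pooling with kernel = stride = grid restores H x W.
    return comp.reshape(h, grid, w, grid, c).mean(axis=(1, 3))

def selection_weights(scores, temperature=1.0):
    """Turn per-sample 'natural selection' scores (e.g. the model's
    confidence on each sample's region of the composite) into loss
    weights. A softmax over the group is used here as an assumption,
    so higher-scoring samples receive larger weights."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()              # numerical stability
    w = np.exp(z)
    return w / w.sum()        # weights sum to 1 within the group
```

In a training loop, the composite would be fed through the network once per group, and the resulting weights would multiply each sample's individual loss term before backpropagation.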
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 5674