Abstract: The datasets we collect are always only a sampling of the real world. In this letter, we explore the possibility of achieving high-quality domain adaptation (DA) without explicit adaptation. As a baseline, we implement TasselLFANetV2, a significantly improved second-generation version of TasselLFANet. Reaching an AP50 of 0.981 and an $R^{2}$ of 0.9684, the model delivers leading performance in two typical cross-domain data-distribution scenarios, agriculture and remote sensing (RS), exhibiting strong domain adaptation and generalization and surpassing advanced methods such as YOLOv8-UAV, PlantBiCNet, and SLA. We further find that combining regularization techniques with feature re-mapping modules can effectively improve the domain invariance of the model. Moreover, when the training and validation sets are identical, the model trains to better performance, provided that a proper data transformation strategy is applied. This work provides a new perspective for understanding and solving the problem of domain difference in deep learning. The code and datasets can be accessed at https://github.com/Ye-Sk/TasselLFANetV2.
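For reference, the $R^{2}$ reported above is presumably the standard coefficient of determination between ground-truth and predicted per-image counts (the abstract does not state the exact evaluation protocol); under that assumption it reads as
$$
R^{2} \;=\; 1 \;-\; \frac{\sum_{i}\bigl(y_{i}-\hat{y}_{i}\bigr)^{2}}{\sum_{i}\bigl(y_{i}-\bar{y}\bigr)^{2}},
$$
where $y_{i}$ and $\hat{y}_{i}$ denote the ground-truth and predicted counts for image $i$, and $\bar{y}$ is the mean ground-truth count; values closer to 1 indicate better counting agreement.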
DOI: 10.1109/LGRS.2024.3382871