Abstract: Foundation models pretrained on remote sensing data have shown promise for downstream tasks, yet their behaviour under class imbalance remains underexplored. We benchmark two foundation models, DOFA and SAR-JEPA, against ImageNet-pretrained models on the severely imbalanced OpenSARShip dataset. We apply four feature-space oversampling techniques exclusively to the minority classes, scaling each to three times its original size. Our approach achieves improvements of up to 8.34\% Macro-F1 and 7.34\% accuracy over the baseline foundation models, demonstrating that targeted oversampling enables better-balanced performance on SAR ship classification. We provide code to pre-extract embeddings and reproducible experiments optimised for free-tier Google Colab: https://github.com/cm-awais/SARShipfoundationModels.
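The abstract's core idea, oversampling minority classes in feature (embedding) space to a fixed multiple of their original size, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `oversample_minority` and the SMOTE-style interpolation between random pairs of minority embeddings are assumptions standing in for whichever of the four techniques is used.

```python
import numpy as np

def oversample_minority(features, labels, minority_class, factor=3, rng=None):
    """Grow one minority class to `factor` times its original size by
    SMOTE-style interpolation between random pairs of its feature vectors.
    (Illustrative sketch only; the paper benchmarks four such techniques.)"""
    rng = np.random.default_rng(rng)
    minority = features[labels == minority_class]
    n_new = len(minority) * (factor - 1)  # synthetic samples needed to reach factor x
    i = rng.integers(0, len(minority), size=n_new)
    j = rng.integers(0, len(minority), size=n_new)
    t = rng.random((n_new, 1))  # interpolation coefficients in [0, 1)
    synthetic = minority[i] + t * (minority[j] - minority[i])
    new_features = np.vstack([features, synthetic])
    new_labels = np.concatenate([labels, np.full(n_new, minority_class)])
    return new_features, new_labels
```

Working on pre-extracted embeddings rather than raw SAR images keeps this step cheap, which is consistent with the abstract's emphasis on experiments that fit free-tier Google Colab.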
Submission Number: 6