Keywords: Reproducibility, Trustworthiness, Determinism, Randomness
Abstract: Deep learning models are often evaluated under the assumption that fixing random seeds ensures reproducibility and fairness. While rerunning with the same seed yields identical results, this form of reproducibility does not capture the variability that arises when different seeds are used. Such seed-dependent variation undermines the robustness and trustworthiness of reported results. We introduce Variance Minimizer Loss (VML), an adaptive, volatility-aware penalty that reduces stochastic fluctuation within a single training run. VML is architecture-agnostic and integrates as a drop-in replacement for the standard objective. On CIFAR-10/100 across four architectures, VML reduces the across-seed standard deviation of accuracy by 33–75% while leaving mean accuracy essentially unchanged. Crucially, VML achieves these gains without extra computational cost.
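To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of what an adaptive, volatility-aware, drop-in penalty of this kind could look like. The abstract does not specify VML's actual formulation, so the class name, the EMA-based volatility estimate, and the hyperparameters `lam` and `momentum` are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a volatility-aware training objective.
# All design choices below (EMA tracking of the loss level, a squared-deviation
# penalty, the `lam` and `momentum` hyperparameters) are assumptions for
# illustration; they are not taken from the paper.
import torch
import torch.nn as nn


class VarianceMinimizerLoss(nn.Module):
    """Cross-entropy plus an adaptive penalty on batch-to-batch loss volatility:
    one plausible reading of a 'volatility-aware' drop-in objective."""

    def __init__(self, lam: float = 0.1, momentum: float = 0.9):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.lam = lam            # penalty weight (assumed hyperparameter)
        self.momentum = momentum  # EMA factor for the running loss level
        self.register_buffer("ema_loss", torch.tensor(0.0))
        self.register_buffer("initialized", torch.tensor(False))

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        base = self.ce(logits, targets)
        if not bool(self.initialized):
            # Seed the EMA with the first observed loss value.
            self.ema_loss = base.detach()
            self.initialized.fill_(True)
        # Penalize squared deviation from the running loss level: large swings
        # between batches (stochastic fluctuation) increase the objective.
        penalty = (base - self.ema_loss) ** 2
        # Update the EMA with a detached value so no gradient flows through it.
        self.ema_loss = (self.momentum * self.ema_loss
                         + (1.0 - self.momentum) * base.detach())
        return base + self.lam * penalty
```

Under these assumptions, the loss would replace `nn.CrossEntropyLoss` directly in an existing training loop, adding only a scalar EMA update per batch, which is consistent with the abstract's claim of no extra computational cost.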
Supplementary Material: zip
Primary Area: optimization
Submission Number: 24299