Keywords: Domain Generalization, Stability Bound, Learning Method, Regularization
Abstract: Learning from limited samples is challenging for machine learning, as it leads to unstable model estimation: the gap between the empirical risk and the expected risk of a model grows as the size of the training data decreases. To address this, the classical VC bound suggests reducing the VC dimension of models through regularization. However, the data in domain generalization are not independent and identically distributed (i.i.d.), so such bounds fail to provide effective guidance for learning. To fill this gap, we present stability bounds. Specifically, we derive a general exponential-decay upper bound based on the notion of model stability and McDiarmid's inequality. Building on this, we present stability bounds for models obtained by regularization-based learning methods. Finally, we apply this result to a classification case and develop a learning method. We also study the stability and generalization error bounds of the proposed learning method, as well as its convergence properties. Additionally, we conduct experiments on datasets of different sizes to analyze the effectiveness of our methods in real-world applications.
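For context, a standard exponential-decay bound of this kind is sketched below in its classical i.i.d. form, derived via McDiarmid's inequality for a uniformly stable algorithm; the abstract does not state the paper's exact result, so the constants and the non-i.i.d. adaptation here are illustrative only.

```latex
% Illustrative uniform-stability generalization bound (classical i.i.d. form,
% obtained via McDiarmid's inequality). The paper's bound for non-i.i.d.
% domain-generalization data may differ in constants and dependence structure.
% For a \beta-uniformly-stable algorithm A trained on a sample S of size n,
% with loss bounded by M, with probability at least 1 - \delta over S:
\[
  R(A_S) \;\le\; \widehat{R}(A_S) \;+\; 2\beta
  \;+\; \bigl(4 n \beta + M\bigr)\sqrt{\frac{\ln(1/\delta)}{2n}} .
\]
```

When the stability parameter satisfies \(\beta = O(1/n)\), as is typical for regularization-based learners, the excess term vanishes at rate \(O(\sqrt{\ln(1/\delta)/n})\), which is the sense in which the deviation probability decays exponentially in the bound's slack.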
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 8657