Robust support vector machine based on sample screening

12 Aug 2024 (modified: 11 Oct 2024) · IEEE ICIST 2024 Conference Submission · CC BY 4.0
TL;DR: The paper proposes a novel robust SVM framework to enhance classifier resilience against contaminated data without sacrificing accuracy.
Abstract: The Support Vector Machine (SVM) is a prevalent classifier in machine learning, yet its robustness is compromised by contaminated samples. Such samples, often encountered in practice, deviate from the expected data distribution and can include irrelevant or adversarial instances. To enhance SVM's resilience, the Fuzzy SVM (FSVM) was introduced, leveraging sample weights to mitigate the impact of outliers. However, FSVM has been criticized for its tendency to sacrifice accuracy, leading to inconsistent performance gains. To address this issue, we introduce a novel robust SVM framework designed to counteract the effects of adversarial samples during training. Our approach dynamically sets the weights of samples with large loss values to zero, thereby diminishing the influence of outliers. This can be viewed as sample screening performed during training, which also reduces training time. The modification is particularly effective when training data contain intentionally misleading labels. Experimental findings demonstrate that this strategy significantly enhances the classifier's robustness against contaminated data without compromising accuracy. This robust SVM presents a promising solution for improving the reliability of SVMs in real-world applications, where data integrity can be a critical concern.
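The sketch below illustrates the screening idea described in the abstract: fit an SVM, compute each sample's hinge loss, zero out the weights of high-loss samples, and refit on the retained set. The loss threshold, the single refit step, and the use of scikit-learn's LinearSVC are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of loss-based sample screening for a robust SVM
# (assumed single screening pass with a hypothetical loss threshold).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic data with a fraction of flipped (adversarial) labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
y = 2 * y - 1                        # map labels to {-1, +1}
rng = np.random.default_rng(0)
flip = rng.random(len(y)) < 0.1      # contaminate 10% of the labels
y[flip] = -y[flip]

# Initial fit on all samples.
svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# Hinge loss of each sample under the current model.
margins = y * svm.decision_function(X)
hinge = np.maximum(0.0, 1.0 - margins)

# Screening: weight 0 for samples with large loss, 1 otherwise.
loss_threshold = 2.0                 # hypothetical cutoff; a tuning parameter in practice
weights = np.where(hinge > loss_threshold, 0.0, 1.0)

# Refit using only the retained samples, which also shrinks the effective training set.
svm_robust = LinearSVC(C=1.0, max_iter=10000).fit(X, y, sample_weight=weights)
print(f"retained {int(weights.sum())} of {len(y)} samples")
```

In practice the screening could be repeated or interleaved with the optimizer's iterations, which is where the reported training-time savings would come from; the one-shot refit above is only meant to show the weight-zeroing mechanism.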
Submission Number: 96