Keywords: fair machine learning, mixed-integer optimization, support vector machine
Abstract: Machine learning classifiers are increasingly deployed in high-stakes domains such as credit scoring, hiring, and criminal justice, where concerns about algorithmic bias have become central. Existing approaches to algorithmic fairness often rely either on removing sensitive features ("fairness through unawareness") or on post-hoc corrections, both of which limit transparency and flexibility. We propose a mixed-integer programming framework, FairSVM, that embeds fairness constraints directly into the training of soft-margin Support Vector Machines. Our formulation expresses multiple group fairness notions (statistical parity, predictive equality, equal opportunity, equalized odds, and conditional statistical parity) as linear constraints within the SVM model, enabling explicit control over the trade-off between predictive accuracy and fairness through adjustable parameters. We evaluate FairSVM on three widely studied datasets (German Credit, Adult Income, COMPAS) and compare it against both optimization-based (FairOCT) and model-agnostic (CR, ExpG, RTO) benchmarks. Results show that FairSVM substantially reduces group disparities with only a limited loss in accuracy, while offering greater flexibility in navigating fairness-accuracy trade-offs. These findings highlight the potential of optimization-based formulations as a foundation for next-generation fairness-aware machine learning models.
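To make the constraint embedding concrete, here is a minimal sketch of how one fairness notion, statistical parity, can enter a soft-margin SVM trained as a mixed-integer program. The big-M linking of decision values to binary prediction variables $z_i$, the tolerance $\varepsilon$, and the group index sets $A$ and $B$ are illustrative assumptions, not the paper's exact formulation:

\begin{align*}
\min_{w,\, b,\, \xi,\, z} \quad & \tfrac{1}{2}\lVert w \rVert_2^2 + C \sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \qquad i = 1, \dots, n, \\
& w^\top x_i + b \le M z_i, \qquad w^\top x_i + b \ge -M\,(1 - z_i), \qquad i = 1, \dots, n, \\
& -\varepsilon \;\le\; \frac{1}{|A|} \sum_{i \in A} z_i \;-\; \frac{1}{|B|} \sum_{i \in B} z_i \;\le\; \varepsilon, \\
& z_i \in \{0, 1\}, \qquad i = 1, \dots, n.
\end{align*}

Here $z_i = 1$ whenever point $i$ is predicted positive, so the parity constraint bounds the gap between the groups' positive prediction rates by $\varepsilon$. With the quadratic margin objective this is a mixed-integer quadratic program; replacing $\lVert w \rVert_2^2$ with an $\ell_1$ regularizer would keep the model purely linear. Other notions such as equal opportunity or equalized odds would follow by restricting the sums to label-conditioned subsets (e.g., $i \in A$ with $y_i = +1$).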
Submission Number: 63