Abstract: A major challenge in responsible Machine Learning (ML) engineering is ensuring fairness across multiple protected attributes and their intersections. Existing bias mitigation techniques and Automated Machine Learning (AutoML) systems often fail to address this due to the combinatorial explosion of configurations during hyperparameter optimization (HPO). We propose \textsc{FairSpace}, a fairness-aware framework that jointly performs HPO and dataset-specific feature engineering while strategically pruning the configuration space. \textsc{FairSpace} integrates LLM-assisted feature engineering with a bi-objective cost function that balances fairness and accuracy. Experimental results on five widely used datasets show that \textsc{FairSpace} achieves win–win outcomes, simultaneously improving fairness and accuracy, in 63\% of cases, outperforming state-of-the-art (SOTA) baselines that achieve up to 60\%. Moreover, its targeted pruning strategy lets \textsc{FairSpace} reach these results with approximately 25\% less computation time than the SOTA AutoML baseline FairAutoML. By explicitly tackling intersectional fairness, \textsc{FairSpace} places 94\% of its outcomes in the \emph{win–win} and \emph{good trade-off} regions, providing a consistent and generalizable foundation for fairness-aware AutoML.
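The abstract does not specify the form of the bi-objective cost. Below is a minimal sketch of one plausible instantiation, assuming a weighted-sum scalarization of error rate and a demographic-parity gap over (possibly intersectional) protected groups; the weight `alpha`, the fairness metric, and all names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def bi_objective_cost(y_true, y_pred, groups, alpha=0.5):
    """Illustrative bi-objective cost: weighted sum of error rate and a
    demographic-parity gap across protected groups.

    NOTE: alpha, the fairness metric, and the scalarization are
    assumptions for illustration; the paper's cost function may differ.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)

    # Accuracy objective: misclassification rate (1 - accuracy).
    error = np.mean(y_pred != y_true)

    # Fairness objective: max gap in positive-prediction rates between
    # any two (possibly intersectional) protected groups.
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    fairness_gap = max(rates) - min(rates)

    return alpha * error + (1 - alpha) * fairness_gap

# Example with intersectional groups encoded as sex-by-age strings.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
groups = ["F-young", "F-young", "M-old", "M-old",
          "F-old", "F-old", "M-young", "M-young"]
print(bi_objective_cost(y_true, y_pred, groups, alpha=0.7))
```

Under this assumed scalarization, an HPO search can minimize a single scalar while still trading off both objectives; sweeping `alpha` would trace out different fairness-accuracy trade-off points.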
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Kuldeep_S._Meel2
Submission Number: 6645