A benchmark of categorical encoders for binary classification

Published: 26 Sept 2023, Last Modified: 02 Nov 2023. NeurIPS 2023 Datasets and Benchmarks Poster.
Keywords: categorical data, encoder, benchmark, sensitivity analysis, replicability, generalizability, ranking
TL;DR: Our study shows that encoder performance is highly sensitive to factors such as ML models, metrics, and tuning, which explains the disagreement between related studies. We study the replicability of our results and provide recommendations for choosing encoders.
Abstract: Categorical encoders transform categorical features into numerical representations that are indispensable for a wide range of machine learning models. Existing encoder benchmark studies lack generalizability because of their limited choice of (1) encoders, (2) experimental factors, and (3) datasets. Additionally, inconsistencies arise from the adoption of varying aggregation strategies. This paper is the most comprehensive benchmark of categorical encoders to date, including an extensive evaluation of 32 configurations of encoders from diverse families, with 36 combinations of experimental factors, and on 50 datasets. The study shows the profound influence of dataset selection, experimental factors, and aggregation strategies on the benchmark's conclusions, aspects disregarded in previous encoder benchmarks. Our code is available at \url{https://github.com/DrCohomology/EncoderBenchmarking}.
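For illustration, here is a minimal sketch of one encoder family such benchmarks typically cover: smoothed target (mean) encoding, where each category is replaced by a blend of its target mean and the global target mean. The function name, smoothing scheme, and toy data below are our own assumptions for demonstration, not taken from the paper's code.

```python
import pandas as pd

def target_encode(train, col, target, smoothing=10.0):
    """Smoothed target encoding for a binary target.

    Each category value is mapped to a weighted average of the
    category's target mean and the global target mean; categories
    with few observations are shrunk toward the global mean to
    reduce overfitting. (Illustrative sketch, not the paper's API.)
    """
    global_mean = train[target].mean()
    stats = train.groupby(col)[target].agg(["mean", "count"])
    smooth = (stats["count"] * stats["mean"] + smoothing * global_mean) / (
        stats["count"] + smoothing
    )
    # Unseen categories fall back to the global mean.
    return train[col].map(smooth).fillna(global_mean)

# Toy example (hypothetical data):
df = pd.DataFrame({
    "color": ["red", "red", "blue", "blue", "green"],
    "y": [1, 0, 1, 1, 0],
})
df["color_enc"] = target_encode(df, "color", "y")
```

Benchmarks like this one evaluate many such configurations (e.g., varying the smoothing strength) alongside other encoder families such as one-hot and ordinal encoding.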
Supplementary Material: pdf
Submission Number: 320