BERT-Sort: A Zero-shot MLM Semantic Encoder on Ordinal Features for AutoML

Published: 16 May 2022, Last Modified: 05 May 2023
Venue: AutoML-Conf 2022 (Main Track)
Readers: Everyone
Abstract: Data pre-processing is one of the key steps in creating machine learning pipelines for tabular data. One of the common data pre-processing operations implemented in AutoML systems is to encode categorical features as numerical features. Typically, this is implemented with a simple alphabetical sort on the categorical values, using functions such as OrdinalEncoder and LabelEncoder in Scikit-Learn and H2O. However, there often exist semantic ordinal relationships among the categorical values, such as quality level (e.g., 'very good' > 'good' > 'normal' > 'poor') or month (e.g., 'Jan' < 'Feb' < 'Mar'). Such semantic relationships are not exploited by previous AutoML approaches. In this paper, we introduce BERT-Sort, a novel approach to semantically encode ordinal categorical values via zero-shot Masked Language Models (MLM), and apply it to AutoML for tabular data. We create the first benchmark for sorting categorical ordinal values, consisting of 42 features from 10 public data sets, on which BERT-Sort improves the semantic encoding of ordinal values by 27% over existing approaches. We perform a comprehensive evaluation of BERT-Sort with different public MLMs, such as RoBERTa, XLM, and DistilBERT. We also compare the performance of raw data sets against data sets encoded with BERT-Sort on different AutoML platforms, including AutoGluon, FLAML, H2O, and MLJAR, to evaluate the proposed approach in an end-to-end scenario, where BERT-Sort achieves performance close to that of a hard-encoded feature. The artifacts of BERT-Sort are available at https://github.com/marscod/BERT-Sort.
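As a concrete illustration (a minimal sketch, not the paper's implementation), the snippet below first shows how Scikit-Learn's default OrdinalEncoder orders categories alphabetically, breaking the semantic order described above, and then how a zero-shot masked language model queried through a Hugging Face fill-mask pipeline can score candidate ordinal values; the prompt template, the choice of roberta-base, and the scoring rule are illustrative assumptions, not BERT-Sort's exact method.

# Part 1: Scikit-Learn's default OrdinalEncoder sorts categories alphabetically,
# which breaks the semantic order poor < normal < good < very good.
from sklearn.preprocessing import OrdinalEncoder

quality = [["poor"], ["normal"], ["good"], ["very good"]]
enc = OrdinalEncoder()                       # categories_ are sorted lexicographically
print(enc.fit(quality).categories_)          # ['good', 'normal', 'poor', 'very good']

# The semantic order can be imposed explicitly once it is known:
enc_semantic = OrdinalEncoder(categories=[["poor", "normal", "good", "very good"]])
print(enc_semantic.fit_transform(quality).ravel())   # [0. 1. 2. 3.]

# Part 2: A hedged sketch of zero-shot MLM scoring with a fill-mask pipeline.
# The anchor prompt and scoring below are illustrative, not the paper's algorithm.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
values = ["poor", "normal", "good", "very good"]
mask = fill_mask.tokenizer.mask_token
template = f"Overall, the quality was terrible, in other words {mask}."

scores = {}
for v in values:
    # targets= restricts predictions to the given word; multi-token values
    # such as "very good" are approximated by their first sub-token.
    scores[v] = fill_mask(template, targets=v)[0]["score"]

# Values most compatible with the "terrible" anchor come first (illustrative only).
print(sorted(values, key=scores.get, reverse=True))

A full implementation would aggregate scores over several anchor prompts and handle multi-token values more carefully; this sketch only conveys the idea of using MLM probabilities as an ordering signal.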
Keywords: AutoML, BERT, Semantic Sorting, Ordinal Encoder, MLM, Tabular Data
One-sentence Summary: We introduce BERT-Sort, a novel approach to semantically encode ordinal categorical values via zero-shot Masked Language Models (MLM) and apply it to AutoML for tabular data.
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Mehdi Bahrami, mbahrami@fujitsu.com
CPU Hours: 67
GPU Hours: 0
TPU Hours: 0
Evaluation Metrics: Yes
Class Of Approaches: Masked Language Model, BERT, Encoder
Datasets And Benchmarks: uci-audiology-original, Kaggle-cat-in-the-dat-ii, Kaggle-top-1000-highest-grossing-movies, uci_car_evaluation, uci_Coil_1999_Competition_Data, uci-automobile, uci_labor-relations, uci_Nursery, uci_Post-Operative-Patient, uci_Pittsburgh_Bridges
Performance Metrics: Accuracy, F1, Ordinal Accuracy (proposed)
Benchmark Performance: Ordinal Value Benchmark, 27% semantic ordinal accuracy; Ordinal Value Benchmark, 55% accuracy
Benchmark Time: Ordinal Value Benchmark, 1; AutoML Ordinal Value Benchmark, 76
Main Paper And Supplementary Material: pdf
Steps For Environmental Footprint Reduction During Development: The pre-trained DistilBERT MLM is included in this study's evaluation so that BERT-Sort can be run on a lighter and faster model.
Estimated CO2e Footprint: 2.46
Code And Dataset Supplement: zip