Enhancing Gradient Boosting Machines with Attention

TMLR Paper 3690 Authors

14 Nov 2024 (modified: 28 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Gradient boosting machines (GBMs) are a popular class of machine learning models, well known for their high accuracy and flexibility. Despite two decades of research improving their accuracy, speed, and robustness, room for improvement remains, notably in their ability to capture complex patterns in noisy data. To address this challenge, this paper proposes AMBeRBoost, a novel model that integrates neural attention mechanisms into a GBM, helping the model “focus” on important data and improving its predictive performance on otherwise hard-to-predict datasets. A series of experiments evaluates the effect of the attention mechanism and compares AMBeRBoost against other state-of-the-art models across several publicly available datasets. The results show that AMBeRBoost consistently outperforms the attention-free baseline on almost all metrics, with performance comparable to, and sometimes exceeding, state-of-the-art models. This research contributes to the continued improvement and refinement of machine learning models by bridging the gap between GBMs and neural attention mechanisms.
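The abstract does not specify how the attention mechanism is wired into the boosting procedure. Purely as a hedged illustration of the general idea of attention-guided boosting, the sketch below reweights training samples with softmax attention scores before fitting each tree; the class name `AttentiveGBM`, the helper `attention_scores`, and the fixed "key" vector are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of attention-weighted gradient boosting (squared loss).
# Assumption: "focusing" is implemented as per-sample attention weights
# passed to each tree via sample_weight. This is NOT AMBeRBoost itself.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def attention_scores(X, key):
    """Softmax over per-sample similarity to a (placeholder) key vector."""
    logits = X @ key / np.sqrt(X.shape[1])
    logits -= logits.max()            # numerical stability
    w = np.exp(logits)
    return w / w.sum()


class AttentiveGBM:
    """Least-squares boosting where each tree is fit with attention-derived weights."""

    def __init__(self, n_estimators=100, learning_rate=0.1, max_depth=3):
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.max_depth = max_depth
        self.trees = []

    def fit(self, X, y):
        self.init_ = y.mean()
        pred = np.full(len(y), self.init_)
        # Placeholder "query": in a real model this would be learned.
        key = np.random.default_rng(0).standard_normal(X.shape[1])
        for _ in range(self.n_estimators):
            residual = y - pred                      # negative gradient of squared loss
            w = attention_scores(X, key) * len(y)    # rescale so weights average to 1
            tree = DecisionTreeRegressor(max_depth=self.max_depth)
            tree.fit(X, residual, sample_weight=w)   # attention "focuses" the fit
            pred += self.learning_rate * tree.predict(X)
            self.trees.append(tree)
        return self

    def predict(self, X):
        pred = np.full(X.shape[0], self.init_)
        for tree in self.trees:
            pred += self.learning_rate * tree.predict(X)
        return pred
```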
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=jZzUzZIyni
Changes Since Last Submission: Fixed formatting, including font size.
Assigned Action Editor: ~Chinmay_Hegde1
Submission Number: 3690