Keywords: tabular learning, anomaly detection
Abstract: Tabular anomaly detection (TAD) is an important task in machine learning, since many real-world datasets are represented in tabular form. However, it remains challenging due to the lack of labeled anomalies and the heterogeneous nature of features. Although many deep learning methods have been developed for TAD, most still rely on simple multilayer perceptrons (MLPs), overlooking architectural design, and in some cases they even underperform traditional machine learning methods such as KNN. Motivated by this, we propose LATTE, a simple yet effective reconstruction-based framework that introduces (i) an attention-based bottleneck to capture inter-column dependencies and (ii) a learnable memory bank, inspired by KNN, to retrieve prototypical normal patterns and amplify anomaly signals. By unifying these components, LATTE consistently outperforms state-of-the-art methods on standard TAD benchmarks.
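The memory-bank idea in the abstract, retrieving prototypical normal patterns and scoring a sample by how well they reconstruct it, can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique, not the authors' implementation; the function name, shapes, and the softmax-attention retrieval rule are all assumptions.

```python
import numpy as np

def memory_bank_score(x, memory, tau=1.0):
    """Hypothetical sketch of memory-bank anomaly scoring.

    x: (d,) test sample; memory: (K, d) prototypes of normal patterns.
    Reconstructs x as a softmax-attention mixture of prototypes and
    returns the reconstruction error as the anomaly score: samples far
    from every prototype cannot be reconstructed well and score high.
    """
    sims = memory @ x / tau            # similarity of x to each prototype
    w = np.exp(sims - sims.max())
    w /= w.sum()                       # attention weights over the memory
    recon = w @ memory                 # attention-weighted reconstruction
    return float(np.linalg.norm(x - recon))

# Toy example (assumed data): two prototypes of normal behavior.
memory = np.array([[1.0, 0.0], [0.0, 1.0]])
normal = np.array([0.9, 0.1])          # near a prototype -> low score
outlier = np.array([5.0, 5.0])         # far from all prototypes -> high score
```

In the paper the memory entries are learned jointly with the reconstruction network; here they are fixed vectors purely to show how retrieval amplifies the anomaly signal.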
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 23784