DA-BAG: A multi-model fusion text classification method combining BERT and GCN using self-domain adversarial training

Published: 01 Jan 2025, Last Modified: 22 Jul 2025 · J. Intell. Inf. Syst. 2025 · CC BY-SA 4.0
Abstract: Pre-training-based methods are considered among the most advanced techniques in natural language processing tasks, particularly in text classification. However, these methods often overlook global semantic information. In contrast, traditional graph learning methods focus solely on the structural information captured when converting text to a graph, neglecting the local information hidden within the syntactic structure of the text. Naively combining the two approaches may introduce new noise and training biases. To tackle these challenges, we introduce DA-BAG, a novel approach that co-trains BERT and graph convolution models. Using a self-domain adversarial training method on a single dataset, DA-BAG extracts multi-domain distribution features across multiple models, enabling self-adversarial domain adaptation training without the need for additional data and thereby enhancing model generalization and robustness. Furthermore, by incorporating an attention mechanism across the models, DA-BAG effectively combines the structural semantics of the graph with the token-level semantics of the pre-trained model, leveraging the hidden information within the text's syntactic structure. Additionally, a sequential multi-layer graph convolutional network (GCN) connection structure based on a residual pre-activation variant is employed to stabilize the feature distribution of the graph data and adjust the graph data structure accordingly. Extensive evaluations on five datasets (20NG, R8, R52, Ohsumed, MR) demonstrate that DA-BAG achieves state-of-the-art performance across a diverse range of datasets.
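The sketch below illustrates two of the building blocks the abstract names: a residual pre-activation GCN layer and an attention-weighted fusion of BERT token-level features with GCN structural features. It is a minimal PyTorch example under assumed shapes and module names (`PreActGCNLayer`, `AttentionFusion` are illustrative, not the authors' code), and it omits the self-domain adversarial training component.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PreActGCNLayer(nn.Module):
    """Residual pre-activation graph convolution: h' = h + A_hat @ (ReLU(LayerNorm(h)) W)."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.weight = nn.Linear(dim, dim, bias=False)

    def forward(self, h, a_hat):
        # Pre-activation: normalize and apply the non-linearity before the graph
        # convolution, then add a residual connection to stabilize feature distributions.
        return h + a_hat @ self.weight(F.relu(self.norm(h)))


class AttentionFusion(nn.Module):
    """Attention-weighted fusion of BERT token-level features and GCN structural features."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, bert_feat, gcn_feat):
        # Stack the two views and learn a per-document soft weighting between them.
        stacked = torch.stack([bert_feat, gcn_feat], dim=1)   # (batch, 2, dim)
        alpha = torch.softmax(self.score(stacked), dim=1)     # (batch, 2, 1)
        return (alpha * stacked).sum(dim=1)                   # (batch, dim)


if __name__ == "__main__":
    batch, dim, num_classes = 4, 768, 5
    h = torch.randn(batch, dim)                                # document-node features on the text graph
    a_hat = torch.softmax(torch.randn(batch, batch), dim=-1)   # toy normalized adjacency matrix
    bert_cls = torch.randn(batch, dim)                         # BERT [CLS] embeddings for the same documents

    # Sequential multi-layer GCN stack with residual pre-activation connections.
    for layer in [PreActGCNLayer(dim) for _ in range(2)]:
        h = layer(h, a_hat)

    fused = AttentionFusion(dim)(bert_cls, h)
    logits = nn.Linear(dim, num_classes)(fused)                # classification head
    print(logits.shape)                                        # torch.Size([4, 5])
```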