Extracting and Composing Robust Features With Broad Learning System

Published: 01 Jan 2023 · Last Modified: 15 Nov 2024 · IEEE Trans. Knowl. Data Eng. 2023 · CC BY-SA 4.0
Abstract: With its effective performance and fast training speed, the broad learning system (BLS) has been widely developed in recent years and offers a new way to train networks. However, the randomly generated feature nodes and enhancement nodes in a BLS network may contain redundant and inefficient features, which degrade subsequent classification performance. To address this issue, we propose a series of BLS-based autoencoder networks from the perspective of unsupervised feature extraction: the single-hidden-layer autoencoder built on BLS (BLS-AE), the stacked BLS-based autoencoder (ST-BLS), the sparse BLS-based autoencoder (SP-BLS), and the stacked sparse BLS-based autoencoder (SS-BLS). The proposed BLS-based autoencoder networks retain the efficient training of the BLS model and avoid the time-consuming iterative parameter optimization required by traditional autoencoders. In addition, higher-level abstract features of the input data can be learned through the progressive encoding and decoding process, and training the parameters with $L_1$ regularization further enhances the robustness of the extracted features. Extensive comparative experiments on real-world data sets demonstrate the superiority of the proposed methods in terms of both effectiveness and efficiency.
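To make the BLS mechanism referenced above concrete, the sketch below shows the standard broad expansion that the abstract's "feature nodes" and "enhancement nodes" refer to: random feature nodes, nonlinear enhancement nodes, and a closed-form ridge-regression readout. This is a minimal illustration of generic BLS, not the paper's proposed autoencoder variants; node counts, the `tanh` activation, and the regularization strength are arbitrary assumptions (in the sparse variants, the abstract's $L_1$ penalty would replace the ridge term).

```python
import numpy as np

def bls_features(X, n_feature_nodes=20, n_enhance_nodes=40, seed=0):
    """Broad expansion A = [Z | H]: random feature nodes Z plus
    nonlinear enhancement nodes H. Illustrative sketch only; node
    counts and activation are not values from the paper."""
    rng = np.random.default_rng(seed)
    We = rng.standard_normal((X.shape[1], n_feature_nodes))
    be = rng.standard_normal(n_feature_nodes)
    Z = X @ We + be                        # randomly mapped feature nodes
    Wh = rng.standard_normal((n_feature_nodes, n_enhance_nodes))
    bh = rng.standard_normal(n_enhance_nodes)
    H = np.tanh(Z @ Wh + bh)               # enhancement nodes
    return np.hstack([Z, H])

def bls_readout(A, Y, lam=1e-2):
    """Closed-form ridge solution W = (A^T A + lam*I)^{-1} A^T Y,
    which is what makes BLS training fast (no iterative descent)."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

# Toy regression: fit a smooth target from random inputs.
X = np.random.default_rng(1).standard_normal((100, 5))
Y = np.sin(X).sum(axis=1, keepdims=True)
A = bls_features(X)            # (100, 60) broad feature matrix
W = bls_readout(A, Y)          # (60, 1) output weights
pred = A @ W
```

Because the hidden weights are random and only `W` is solved for, training reduces to one linear solve; the redundancy among these random nodes is exactly the issue the abstract's autoencoder variants target.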