Abstract: As a newly proposed neural network, the broad learning system (BLS) offers superior performance and frugal time consumption. However, the degradation problem is unavoidable for BLS, since the randomness of the parameters makes the linear independence between the new nodes and the original nodes indeterminate. Regrettably, no existing work addresses the degradation problem of broad networks. To address it, this paper proposes a new incremental mechanism that constructs a sequence of residual operators with norm convergence for BLS; the resulting learning system is called the broad residual learning system (BRLS). It is proved that the norm convergence of the residual operators prevents the network performance from deteriorating as learning progresses, thereby effectively mitigating the degradation problem. In addition, the basic BRLS and three incremental BRLS networks (adding enhancement nodes, feature nodes, and input data) are introduced to satisfy the requirements of various scenarios. An analysis and comparison of the time complexity of BRLS and BLS shows that the novel BRLS incremental mechanism offers fast computation and frugal memory consumption. Moreover, the convergence theorem and the universal approximation property of BRLS are proved to ensure its feasibility and effectiveness. Validation experiments are performed on publicly available datasets to demonstrate the effectiveness of the proposed methods.
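For readers unfamiliar with what "adding enhancement nodes incrementally" means in this family of networks, the sketch below illustrates the classical BLS-style incremental update of the output weights via the Greville pseudoinverse recursion, which the abstract's incremental mechanisms build upon. It is a minimal illustrative sketch, not the BRLS residual-operator construction itself; all function and variable names (e.g. `incremental_update`) are assumptions for illustration.

```python
# Illustrative sketch (assumption): classical BLS-style incremental update
# when a block of new node outputs is appended, using Greville's recursion.
# This is NOT the BRLS residual-operator mechanism proposed in the paper.
import numpy as np

def incremental_update(A, A_pinv, W, A_new, Y, tol=1e-10):
    """Append new node outputs A_new (n x k) to A (n x m) and update the
    pseudoinverse A_pinv (m x n) and output weights W (m x c) without
    refitting from scratch, given targets Y (n x c)."""
    D = A_pinv @ A_new               # projection of new nodes onto old ones
    C = A_new - A @ D                # component of new nodes outside span(A)
    if np.linalg.norm(C) > tol:      # new nodes add independent directions
        B = np.linalg.pinv(C)
    else:                            # new nodes are (nearly) linearly dependent
        B = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A_pinv)
    A_pinv_next = np.vstack([A_pinv - D @ B, B])
    W_next = np.vstack([W - D @ (B @ Y), B @ Y])
    A_next = np.hstack([A, A_new])
    return A_next, A_pinv_next, W_next
```

Whether the appended nodes are linearly independent of the existing ones decides which branch is taken; the abstract's degradation problem stems from the fact that random parameters leave this independence indeterminate.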