Keywords: Deep Learning, Neural Networks
Abstract: Artificial neural networks (ANNs) are typically designed as tree structures to mimic biological neural networks. However, this structure is limited: nodes on the same level cannot communicate with each other. To address this deficiency, we propose the Yoke neural network (YNN) model, which connects nodes bidirectionally in a complete graph to form a neural module. YNN improves information transfer and eliminates structural bias more effectively than traditional ANNs. Although ANN structures have advanced in recent years to more complex topologies such as directed acyclic graphs (DAGs), these methods still impose unidirectional and acyclic biases. Compared to a traditional ANN, our YNN can therefore emulate biological neural networks more faithfully. In this study, we analyze the limitations of existing ANN structures and attach an auxiliary sparsity constraint to the distribution of connections so that learning focuses on critical connections. In a traditional structure, computation is a flow of tensors in which each unit represents a model level and each level is a tensor with independent elements; YNN instead treats each tensor in the flow as a whole information unit, since the elements of the tensor interact with each other. Moreover, based on the optimized structure, we design a neural module structure using the minimum cut technique to reduce the computation of the YNN model. This learning process is compatible with existing networks and various tasks, and efficiently eliminates structural bias. The quantitative results of our experiments indicate that the learned connectivity is superior to traditional neural network structures.
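The abstract does not spell out the module's update rule or how the sparsity constraint enters training, so the following is a minimal sketch under stated assumptions: nodes in a complete graph exchange messages bidirectionally through a learnable adjacency, and an L1 penalty on that adjacency stands in for the paper's auxiliary sparsity constraint. All names (`YokeModule`, `sparsity_penalty`) and the softmax/L1 choices are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class YokeModule(nn.Module):
    """Sketch of a complete-graph neural module in the spirit of YNN.

    Every node exchanges information bidirectionally with every other
    node; edge (i -> j) and edge (j -> i) carry independent learnable
    weights, so no unidirectional or acyclic bias is imposed.
    """

    def __init__(self, num_nodes: int, dim: int, steps: int = 2):
        super().__init__()
        self.steps = steps
        # Dense, learnable connectivity over the complete graph.
        self.A = nn.Parameter(torch.randn(num_nodes, num_nodes) * 0.1)
        # Node-state update from the current state and aggregated messages.
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_nodes, dim) -- the tensor is treated as one
        # information unit whose elements (nodes) interact.
        for _ in range(self.steps):
            # Aggregate messages from all nodes, weighted by connectivity.
            msg = torch.einsum("ij,bjd->bid",
                               torch.softmax(self.A, dim=-1), h)
            h = torch.tanh(self.update(torch.cat([h, msg], dim=-1)))
        return h

    def sparsity_penalty(self) -> torch.Tensor:
        # Assumed stand-in for the auxiliary sparsity constraint:
        # an L1 term pushing learning toward critical connections.
        return self.A.abs().mean()


# Usage: 8 nodes with 16-dim states; the penalty is added to the task loss.
module = YokeModule(num_nodes=8, dim=16)
h = torch.randn(4, 8, 16)
out = module(h)
loss = out.pow(2).mean() + 1e-3 * module.sparsity_penalty()
loss.backward()
```

Once the connectivity is learned and sparsified, the min-cut step described in the abstract would partition this graph into smaller modules to reduce computation; that step is omitted here because the abstract gives no details of the cut criterion.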
Supplementary Material: zip
Submission Number: 11906