Breaking the Structure of Multilayer Perceptrons with Complex Topologies

Published: 18 Jun 2023, Last Modified: 04 Jul 2023
TAGML 2023 Poster
Keywords: neural networks, complex networks, bio-inspired computing, graph topology, manifold learning
TL;DR: The study explores the use of directed acyclic graphs (DAGs) in neural networks, finding that complex DAG-based networks outperform MLPs in accuracy, particularly in challenging scenarios.
Abstract: Recent advances in neural network (NN) architectures have demonstrated that complex topologies can surpass the performance of conventional feedforward networks. Nonetheless, previous studies investigating the relationship between network topology and model performance have yielded inconsistent results, which complicates their applicability beyond the settings in which they were obtained. In this study, we examine the utility of directed acyclic graphs (DAGs) for modeling intricate relationships among neurons within NNs. We introduce a novel algorithm for the efficient training of DAG-based networks and assess their performance relative to multilayer perceptrons (MLPs). Through experiments on synthetic datasets with varying levels of difficulty and noise, we observe that complex networks built on suitable graphs outperform MLPs in accuracy, particularly in high-difficulty scenarios. Additionally, we examine the theoretical underpinnings of these observations and discuss the trade-offs associated with employing complex networks. Our research offers insights into the capabilities and constraints of complex NN architectures, contributing to the ongoing pursuit of more powerful and efficient deep learning models.
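To make the DAG-based architecture concrete, below is a minimal PyTorch sketch of one plausible formulation: each non-input node is a neuron that applies a learned affine map plus nonlinearity to the outputs of its parents, evaluated in topological order. This is an illustrative assumption, not the paper's training algorithm; the class name `DAGNet` and the parent-list encoding are hypothetical.

```python
# Minimal sketch of a DAG-based network (assumed formulation, not the
# paper's method). Nodes 0..n_inputs-1 are input features; every other
# node is a neuron fed by its DAG parents, evaluated in topological order.
import torch
import torch.nn as nn

class DAGNet(nn.Module):
    def __init__(self, parents, n_inputs, out_nodes):
        # parents: dict mapping node id -> list of parent node ids,
        #          with keys given in topological order.
        super().__init__()
        self.parents = parents
        self.n_inputs = n_inputs
        self.out_nodes = out_nodes
        # One scalar-output linear unit per internal node.
        self.units = nn.ModuleDict({
            str(v): nn.Linear(len(ps), 1) for v, ps in parents.items()
        })

    def forward(self, x):
        # x: (batch, n_inputs); cache one activation column per node.
        acts = {i: x[:, i:i + 1] for i in range(self.n_inputs)}
        for v, ps in self.parents.items():  # relies on topological order
            h = torch.cat([acts[p] for p in ps], dim=1)
            acts[v] = torch.relu(self.units[str(v)](h))
        return torch.cat([acts[v] for v in self.out_nodes], dim=1)

# Usage: a tiny DAG with inputs 0-1, hidden nodes 2-3, output node 4.
net = DAGNet(parents={2: [0, 1], 3: [0, 2], 4: [2, 3]},
             n_inputs=2, out_nodes=[4])
y = net(torch.randn(8, 2))  # -> shape (8, 1)
```

Because edges are per-node rather than per-layer, the graph can express skip connections and irregular fan-in that an MLP's dense layer structure cannot; training proceeds by ordinary backpropagation through the topological evaluation order.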
Supplementary Materials: zip
Type Of Submission: Proceedings Track (8 pages)
Submission Number: 25