Analytical Comparison Between the Pattern Classifiers Based upon a Multilayered Perceptron and Probabilistic Neural Network in Parallel Implementation

Published: 14 Sept 2022, Last Modified: 14 Apr 2025 · Lecture Notes in Computer Science · CC BY 4.0
Abstract: It is well known that, while the training mode of a probabilistic neural network completes quickly by straightforwardly allocating the units of a single hidden layer, its reference mode is slow under ordinary serial computation. A parallel implementation is therefore a desirable option for alleviating this slow operation. In this paper, we first quantify the overall number of step-wise operations required for the reference mode of a probabilistic neural network and for that of a multilayered perceptron (or deep) neural network, both implemented in a parallel environment. Second, we derive the necessary condition under which the reference mode of a probabilistic neural network runs as fast as, or faster than, that of a deep neural network. Based upon this condition, we then deduce a comparative relation between the training mode of a probabilistic neural network, in which the k-means clustering algorithm is applied to reduce the number of hidden units, and that of a deep neural network operated in parallel. It is then shown that both the training and testing modes of a compact-sized network meeting these criteria can run in a parallel environment as fast as, or faster than, those of a feed-forward deep neural network, while maintaining reasonably high classification performance.
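The sketch below illustrates the general scheme the abstract describes, not the paper's actual implementation: a probabilistic neural network whose pattern layer is compressed by running k-means per class, so that the reference mode evaluates k centroids per class instead of one unit per training sample. The kernel width `sigma`, the per-class cluster count `k`, and the use of scikit-learn's `KMeans` are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans


class CompactPNN:
    """Probabilistic neural network with a k-means-reduced pattern layer."""

    def __init__(self, k=8, sigma=0.5):
        self.k = k          # hidden (pattern) units retained per class (assumed value)
        self.sigma = sigma  # Gaussian kernel width (assumed value)
        self.centers = {}   # class label -> (k, d) array of centroids

    def fit(self, X, y):
        # "Training mode": rather than allocating one pattern unit per
        # training sample, cluster each class and keep only the centroids.
        for c in np.unique(y):
            Xc = X[y == c]
            k = min(self.k, len(Xc))
            self.centers[c] = KMeans(
                n_clusters=k, n_init=10, random_state=0
            ).fit(Xc).cluster_centers_
        return self

    def predict(self, X):
        # "Reference mode": every pattern unit fires a Gaussian kernel and
        # the summation layer averages them per class. All kernel
        # evaluations are mutually independent, which is what makes this
        # layer amenable to the parallel execution the paper analyzes.
        labels = sorted(self.centers)
        scores = np.stack([
            np.exp(
                -np.sum((X[:, None, :] - self.centers[c][None]) ** 2, axis=2)
                / (2.0 * self.sigma ** 2)
            ).mean(axis=1)
            for c in labels
        ], axis=1)
        return np.asarray(labels)[np.argmax(scores, axis=1)]


# Usage on toy two-class data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print((CompactPNN(k=8, sigma=0.5).fit(X, y).predict(X) == y).mean())
```

With `k` fixed, the per-query cost of the reference mode no longer grows with the training-set size, which is the compression effect the comparative analysis in the paper relies on.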