Perceptrons Under Verifiable Random Data Corruption

Published: 01 Jan 2023 · Last Modified: 12 Jan 2025 · LOD (1) 2023 · CC BY-SA 4.0
Abstract: We study perceptrons when training data are randomly corrupted by noise and the corrupted examples are subsequently discarded from the training process. Overall, perceptrons appear to be remarkably stable: their accuracy drops only slightly even when large portions of the original datasets are excluded from training in response to verifiable random data corruption. Furthermore, we identify a real-world dataset on which perceptrons appear to require longer training times, both in the general case and in the framework that we consider. Finally, we empirically explore a bound on the learning rate of Gallant's "pocket" algorithm for learning perceptrons and observe that the bound is tighter for non-linearly separable datasets.
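For concreteness, the sketch below illustrates the two ingredients the abstract refers to: discarding verifiably corrupted examples before training, and a pocket-style perceptron in the spirit of Gallant's algorithm, which retains the best weight vector seen so far and thus degrades gracefully on non-linearly separable data. This is a minimal illustration under assumed conventions (function names, the corruption probability `p`, labels in {-1, +1}, and the update schedule are all illustrative choices, not the paper's exact protocol).

```python
import numpy as np

def discard_corrupted(X, y, p, rng=None):
    """Simulate verifiable random corruption: each example is corrupted
    (and hence flagged) independently with probability p, and all
    flagged examples are dropped before training.
    NOTE: an illustrative stand-in for the paper's corruption model."""
    rng = np.random.default_rng(rng)
    keep = rng.random(len(X)) >= p
    return X[keep], y[keep]

def pocket_perceptron(X, y, epochs=500, lr=1.0, rng=None):
    """Pocket-style perceptron sketch: run standard perceptron updates,
    but keep ("pocket") the weight vector that classifies the most
    training examples correctly, so a usable hypothesis is returned
    even when the data are not linearly separable."""
    rng = np.random.default_rng(rng)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb bias into weights
    w = np.zeros(Xb.shape[1])
    pocket_w = w.copy()
    pocket_correct = int((np.sign(Xb @ w) == y).sum())
    for _ in range(epochs):
        i = rng.integers(len(Xb))            # visit a random example
        if np.sign(Xb[i] @ w) != y[i]:       # misclassified -> update
            w += lr * y[i] * Xb[i]
            correct = int((np.sign(Xb @ w) == y).sum())
            if correct > pocket_correct:     # better than the pocket?
                pocket_w, pocket_correct = w.copy(), correct
    return pocket_w

# Example usage on synthetic data (illustrative only):
# rng = np.random.default_rng(0)
# X = rng.normal(size=(200, 2))
# y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
# Xc, yc = discard_corrupted(X, y, p=0.3, rng=0)
# w = pocket_perceptron(Xc, yc, rng=0)
```

The pocket step is what distinguishes this from the plain perceptron: rather than returning the final (possibly oscillating) weights, it returns the best hypothesis observed during training, which is why accuracy can remain stable even after a large fraction of examples has been discarded.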