Abstract: Artificial Neural Networks (ANNs) are becoming a pervasive technology, moving into all realms of socio-technical systems. At the same time, we have seen increased interest in ANNs for the functional safety domain. However, their limited assurance hinders their deployment, becoming a, if not the, show-stopper. Resolving the assurance of ANNs in general, and of Convolutional Neural Networks (CNNs) in particular, is the target of our research. First results, effective but still quite inefficient, are reported. The presented approach is based on a novel concept of statistically analyzing the consensus agreement over bias-free model herds. While these non-optimized models are individually weak (or dumb), a herd of such models is employed to extract a common abstract view. In other words, we propose grouping a number of models with the aim of examining their consensus over groups of herds and, hence, being able not only to gauge assurance but also to provide quantitative assurance on new images. The main contribution of our work is to demonstrate that herds of weak models are able to identify out-of-scope images (i.e., images of untrained classes) and to declare that they do not know which class an image belongs to, stating "I don't know" explicitly.
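The abstract does not specify the consensus mechanism in detail; the following is a minimal sketch, under assumptions, of the general idea of herd-based abstention: a group of weak classifiers votes on an image, and when their agreement falls below a threshold the herd answers "I don't know" rather than forcing a class label. The function names, the threshold value, and the model interface are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: herd voting with an "I don't know" abstention option.
from collections import Counter
from typing import Any, Callable, Sequence, Tuple


def herd_decision(
    models: Sequence[Callable[[Any], int]],  # each weak model maps an image to a class id
    image: Any,
    consensus_threshold: float = 0.7,        # assumed: fraction of agreeing models required
) -> Tuple[str, float]:
    """Return (decision, agreement), where decision is a class id or "I don't know"."""
    votes = [model(image) for model in models]
    top_class, top_count = Counter(votes).most_common(1)[0]
    agreement = top_count / len(votes)
    if agreement < consensus_threshold:
        # Low consensus is taken as a hint that the image is out of scope
        # (i.e., belongs to an untrained class).
        return "I don't know", agreement
    return str(top_class), agreement


if __name__ == "__main__":
    # Stand-in "weak models" (constant classifiers) purely for illustration.
    dumb_models = [lambda img, c=c: c for c in (0, 0, 1, 2, 3)]
    print(herd_decision(dumb_models, image=None))  # -> ("I don't know", 0.4)
```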