Abstract: Deep neural networks (DNNs) are trained under the assumption that the training data are independent and identically distributed. In the real world, autonomous systems typically receive out-of-distribution images, causing a domain shift that hinders performance. One important consideration in the context of ordinal image data (i.e., data whose labels have an intrinsic order) is the choice of loss function and whether it takes the ordinality into account. In this paper, we examine the benefits of ordinal formulations over nominal classification using a human age estimation task. Experiments using DNNs with ordinal data suggest that performance on out-of-distribution data can be improved by over 240% when models are trained with ordinal regression methods rather than nominal classification.
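To illustrate the distinction between the two training objectives discussed above, the sketch below shows one common ordinal formulation (the extended-binary, or cumulative-threshold, encoding) next to plain nominal cross-entropy. This is a generic illustration under assumed choices, not the paper's exact implementation; the class count `K`, the toy labels, and the helper `ordinal_targets` are hypothetical.

```python
# Illustrative sketch (assumed formulation, not necessarily the paper's method):
# a label y in {0, ..., K-1} becomes K-1 cumulative binary targets
# [y > 0, y > 1, ..., y > K-2]. The model emits K-1 logits trained with binary
# cross-entropy, so near-miss predictions are penalised less than distant ones,
# unlike nominal cross-entropy, which treats all misclassifications equally.
import torch
import torch.nn as nn

K = 100  # hypothetical number of age classes

def ordinal_targets(y: torch.Tensor, num_classes: int = K) -> torch.Tensor:
    """Encode integer labels as K-1 cumulative binary targets: t_k = 1 if y > k."""
    thresholds = torch.arange(num_classes - 1, device=y.device)
    return (y.unsqueeze(1) > thresholds).float()

# Toy batch of true ages.
labels = torch.tensor([23, 57, 4])

# Nominal classification: K logits per sample, softmax cross-entropy.
nominal_logits = torch.randn(3, K)
nominal_loss = nn.CrossEntropyLoss()(nominal_logits, labels)

# Ordinal regression: K-1 logits per sample, BCE against cumulative targets.
ordinal_logits = torch.randn(3, K - 1)
ordinal_loss = nn.BCEWithLogitsLoss()(ordinal_logits, ordinal_targets(labels))

print(f"nominal CE loss: {nominal_loss:.3f}, ordinal BCE loss: {ordinal_loss:.3f}")

# At inference, the predicted age is the number of thresholds the sample is
# predicted to exceed (probability > 0.5).
pred_age = (torch.sigmoid(ordinal_logits) > 0.5).sum(dim=1)
```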