Abstract: Bayesian neural networks (BNNs) are competitive multi-class classifiers with reasonably sound probabilistic characteristics. In this paper, we view BNNs as classifier ensembles obtained through sampling, and define the inference problem in BNNs as (1) computing a median probability distribution from the ensemble of distributions, under a given distance, and (2) finding a Bayes-optimal prediction (BOP) under the evaluation metric, given the median. With this (re)formulation, all results on computing medians of sets of probability distributions can be leveraged to strengthen the predictive performance of BNNs. We recall a generic formulation of the problem of computing medians and provide empirical evidence illustrating the impact of the choice of distance on accuracy metrics and calibration errors.
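The two-step formulation in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual method: it assumes the BNN has already produced `K` sampled class-probability vectors, uses two common closed-form medians (the mean, which minimizes the sum of squared L2 distances, and the coordinate-wise median with renormalization as an approximation of the L1 median), and takes the BOP under 0-1 loss, i.e. the argmax of the median distribution. The function names and the `distance` options are illustrative choices, not terminology from the paper.

```python
import numpy as np

def median_distribution(ensemble, distance="l2"):
    """Compute a median of an ensemble of categorical distributions.

    ensemble: (K, C) array; each row is a probability vector from one
    sampled set of BNN weights. The distance determines which median
    is computed (illustrative choices only).
    """
    if distance == "l2":
        # The mean minimizes the sum of *squared* L2 distances.
        med = ensemble.mean(axis=0)
    elif distance == "l1":
        # Coordinate-wise median minimizes the sum of L1 distances
        # (unconstrained); renormalizing is an approximation that
        # returns the result to the probability simplex.
        med = np.median(ensemble, axis=0)
    else:
        raise ValueError(f"unknown distance: {distance}")
    return med / med.sum()

def bayes_optimal_prediction(median):
    """BOP under 0-1 loss: predict the most probable class."""
    return int(np.argmax(median))

# Example: three sampled predictive distributions over three classes.
ens = np.array([[0.7, 0.2, 0.1],
                [0.6, 0.3, 0.1],
                [0.5, 0.4, 0.1]])
med = median_distribution(ens, distance="l1")
pred = bayes_optimal_prediction(med)
```

Other evaluation metrics induce other BOPs (e.g. a cost-sensitive loss would replace the argmax with an expected-cost minimization); the abstract's point is that both the distance and the metric are design choices.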
External IDs: dblp:conf/iukm/TranHNDH25