Efficient Bayesian Sequential Classification Under the Markov Assumption for Various Loss Functions

Published: 01 Jan 2020, Last Modified: 12 May 2023 · IEEE Signal Process. Lett. 2020
Abstract: Optimal sequential classification theory generalizes naturally from that of single-hypothesis classification. However, longer sequences require characterization with higher-order probability distributions; consequently, the design of optimal classifiers can become computationally prohibitive for real-time applications. A Markov assumption on the class random process imposes structure on the distribution, enabling efficient evaluation with a limited number of arithmetic operations; the Viterbi algorithm exploits this property to provide a computationally inexpensive maximum a posteriori classifier when the Markov process is first-order. This letter extends this classical result to general-order Markov processes and assesses the additional computational cost in terms of multiplication operations. Additionally, a second efficient algorithm is provided that determines the optimal hypothesis when a sample-level loss function is used. In contrast to the Viterbi classifier, which implicitly penalizes all erroneous sequences equally, this classifier penalizes misclassifications more gradually, offering greater flexibility for challenging classification tasks.
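The letter itself provides no code, but the two decoding criteria it contrasts can be illustrated for the first-order, discrete-observation case. Below is a minimal NumPy/SciPy sketch, not the authors' implementation: the function names (`viterbi`, `forward_backward_map`) and all parameters are hypothetical. The first function returns the jointly most probable state sequence (the criterion matched to a sequence-level 0-1 loss, where every erroneous sequence is penalized equally); the second maximizes each per-sample posterior marginal, the criterion matched to a sample-level (Hamming-type) loss. Extending either to a general-order Markov process, as the letter does, would enlarge the effective state space accordingly.

```python
import numpy as np
from scipy.special import logsumexp

def viterbi(log_pi, log_A, log_B, obs):
    """MAP sequence decoder for a first-order Markov chain (illustrative sketch).

    log_pi : (K,)   log initial state distribution
    log_A  : (K, K) log transition matrix, log_A[i, j] = log p(x_t = j | x_{t-1} = i)
    log_B  : (K, M) log emission matrix, log_B[j, m] = log p(y_t = m | x_t = j)
    obs    : length-T sequence of observation indices
    """
    T, K = len(obs), len(log_pi)
    delta = np.zeros((T, K))           # best log-probability of any path ending in each state
    psi = np.zeros((T, K), dtype=int)  # backpointers to the best predecessor state
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # (K, K): previous state -> next state
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(K)] + log_B[:, obs[t]]
    # Backtrack the jointly most probable sequence.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def forward_backward_map(log_pi, log_A, log_B, obs):
    """Per-sample MAP decoder: maximizes each marginal posterior p(x_t | y_{1:T}),
    which minimizes the expected sample-level (Hamming) loss rather than the
    sequence-level 0-1 loss that Viterbi implicitly optimizes."""
    T, K = len(obs), len(log_pi)
    alpha = np.zeros((T, K))  # forward log-messages
    beta = np.zeros((T, K))   # backward log-messages (beta[T-1] = 0)
    alpha[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = log_B[:, obs[t]] + logsumexp(alpha[t - 1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp(log_A + log_B[:, obs[t + 1]] + beta[t + 1], axis=1)
    # alpha + beta is the (unnormalized) log posterior marginal at each sample.
    return np.argmax(alpha + beta, axis=1)

if __name__ == "__main__":
    # Toy demonstration with random model parameters.
    rng = np.random.default_rng(0)
    K, M, T = 3, 4, 10
    log_pi = np.log(np.full(K, 1.0 / K))
    A = rng.dirichlet(np.ones(K), size=K)  # row-stochastic transition matrix
    B = rng.dirichlet(np.ones(M), size=K)  # emission probabilities
    obs = rng.integers(0, M, size=T)
    print(viterbi(log_pi, np.log(A), np.log(B), obs))
    print(forward_backward_map(log_pi, np.log(A), np.log(B), obs))
```

The two decoders can disagree: the Viterbi path is constrained to be a single coherent sequence, while the per-sample decoder may pick a state sequence that no single path achieves but that is wrong at fewer individual samples, which is exactly the gradual penalization the abstract describes.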