Minimum Bayes Error Feature Selection for Continuous Speech Recognition

NIPS 2000 (modified: 11 Nov 2022)
Abstract: We consider the problem of designing a linear transformation θ ∈ ℝ^{p×n}, of rank p ≤ n, which projects the features of a classifier x ∈ ℝⁿ onto y = θx ∈ ℝᵖ so as to achieve minimum Bayes error (or probability of misclassification). Two avenues will be explored: the first is to maximize the θ-average divergence between the class densities and the second is to minimize the union Bhattacharyya bound in the range of θ. While both approaches yield similar performance in practice, they outperform standard LDA features and show a 10% relative improvement in the word error rate over state-of-the-art cepstral features on a large vocabulary telephony speech recognition task.
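As a rough illustration of the second avenue, the sketch below evaluates a Bhattacharyya bound on the Bayes error of two Gaussian class densities after a projection y = θx. This is a minimal sketch under the assumption of two Gaussian classes; the function name and interface are hypothetical, not taken from the paper, which treats the general multi-class (union) bound.

```python
import numpy as np

def bhattacharyya_bound(theta, mu1, mu2, S1, S2, p1=0.5, p2=0.5):
    """Bhattacharyya bound on the Bayes error of two Gaussian classes
    (means mu1, mu2; covariances S1, S2; priors p1, p2) after the
    linear projection y = theta @ x.  Hypothetical helper for
    illustration only."""
    # Project the class means and covariances into the range of theta.
    m1, m2 = theta @ mu1, theta @ mu2
    C1, C2 = theta @ S1 @ theta.T, theta @ S2 @ theta.T
    C = 0.5 * (C1 + C2)
    diff = m1 - m2
    # Bhattacharyya distance between the two projected Gaussians.
    d = (0.125 * diff @ np.linalg.solve(C, diff)
         + 0.5 * np.log(np.linalg.det(C)
                        / np.sqrt(np.linalg.det(C1) * np.linalg.det(C2))))
    # Bound on the Bayes error: eps <= sqrt(p1 * p2) * exp(-d).
    return np.sqrt(p1 * p2) * np.exp(-d)

# Example: project 2-D features onto the discriminative first axis.
theta = np.array([[1.0, 0.0]])
mu1, mu2 = np.array([0.0, 0.0]), np.array([3.0, 0.0])
S1 = S2 = np.eye(2)
bound = bhattacharyya_bound(theta, mu1, mu2, S1, S2)
```

Minimizing such a bound over θ (subject to a rank constraint) is one way to pick a low-dimensional feature space; the projection keeping the axis along which the means differ yields a much tighter bound than one discarding it.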