Abstract: This paper introduces a framework for building probabilistic models with subsequential failure transducers. We first show how various types of subsequential transducers commonly used in natural language processing can be represented by probabilistic and conditional probabilistic subsequential failure transducers. We then introduce efficient algorithms for composing conditional probabilistic subsequential transducers with probabilistic subsequential failure transducers and for weight pushing (canonization) of probabilistic subsequential failure transducers. These algorithms are applicable to many tasks in which probabilistic models are represented by subsequential failure transducers. One such task, which we describe in detail, is the construction of the \(HCLG\) weighted transducer used in speech recognition. Finally, we present empirical results comparing the proposed \(HCLG\) failure weighted transducer construction with the standard \(HCLG\) weighted transducer construction.