
Use of HMMs in continuous recognition

  In the isolated mode we used one HMM for each speech unit. In the continuous case this is not possible, because what must be recognized is a sequence of connected speech units, usually called a sentence, and the number of possible sentences may be prohibitively large even for a small vocabulary (a vocabulary of only 10 units already allows 10^7 different seven-unit sentences). In addition to this, there are two other fundamental problems associated with continuous recognition.

(1) We do not know the end points of the speech units contained in the sentence.
(2) We do not know how many speech units are contained in the sentence.

Because of these and similar problems, continuous recognition is more complicated than isolated recognition. However, HMMs provide a good framework for the continuous mode of speech recognition. In this case we connect the HMMs of the individual speech units in a particular sentence to form one large HMM for the sentence, as depicted in Fig. 1.2. This represents the clamped grammar case. The HMM which represents the free grammar case is obtained by connecting all the speech units in the vocabulary, as shown in Fig. 1.3. The transitions between speech units are determined using the so-called language model.

Figure 1.2: A sentence model using HMMs (the clamped grammar case).
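To make the clamped grammar construction concrete, the following is a minimal sketch of how per-unit HMM parameters could be chained into one left-to-right sentence HMM. The function name, the assumption of discrete emission matrices, and the exit_prob parameter controlling the transition out of each unit's last state are illustrative assumptions, not details given in the original text.

    import numpy as np

    def concatenate_unit_hmms(unit_As, unit_Bs, exit_prob=0.5):
        """Chain unit HMMs into one left-to-right sentence HMM.

        unit_As : list of (n_i x n_i) transition matrices, one per unit
        unit_Bs : list of (n_i x m) discrete emission matrices, one per unit
        exit_prob : assumed probability of leaving a unit's last state
                    into the first state of the next unit
        """
        n_total = sum(A.shape[0] for A in unit_As)
        m = unit_Bs[0].shape[1]
        A = np.zeros((n_total, n_total))
        B = np.zeros((n_total, m))

        offset = 0
        for k, (A_k, B_k) in enumerate(zip(unit_As, unit_Bs)):
            n_k = A_k.shape[0]
            # Copy the unit's own transitions and emissions into the big model.
            A[offset:offset + n_k, offset:offset + n_k] = A_k
            B[offset:offset + n_k, :] = B_k
            if k < len(unit_As) - 1:
                # Link the last state of this unit to the first state of the
                # next unit; rescale the row so it still sums to one.
                last = offset + n_k - 1
                A[last, :] *= (1.0 - exit_prob)
                A[last, offset + n_k] = exit_prob
            offset += n_k

        pi = np.zeros(n_total)
        pi[0] = 1.0   # the sentence starts in the first unit's first state
        return pi, A, B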

Figure 1.3: A language model (with vocabulary size 4) using HMMs (the free grammar case). Transition probabilities are not shown.
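Similarly, the free grammar case can be sketched by connecting the last state of every unit HMM to the first state of every unit HMM, with the inter-unit weights supplied by the language model. Here the bigram-style matrix lm_bigram, the uniform initial distribution over units, the exit_prob parameter, and the function name are all illustrative assumptions.

    import numpy as np

    def build_free_grammar_hmm(unit_As, unit_Bs, lm_bigram, exit_prob=0.5):
        """Connect every unit HMM to every other via language-model weights.

        lm_bigram[i, j] : assumed probability that unit j follows unit i
                          (each row sums to one), taken from the language model
        """
        sizes = [A.shape[0] for A in unit_As]
        offsets = np.cumsum([0] + sizes[:-1])
        n_total = sum(sizes)
        m = unit_Bs[0].shape[1]
        A = np.zeros((n_total, n_total))
        B = np.zeros((n_total, m))

        for i, (A_i, B_i) in enumerate(zip(unit_As, unit_Bs)):
            o_i, n_i = offsets[i], sizes[i]
            A[o_i:o_i + n_i, o_i:o_i + n_i] = A_i
            B[o_i:o_i + n_i, :] = B_i
            # Leaving unit i from its last state: enter the first state of any
            # unit j with a probability weighted by the language model.
            last = o_i + n_i - 1
            A[last, :] *= (1.0 - exit_prob)
            for j in range(len(unit_As)):
                A[last, offsets[j]] += exit_prob * lm_bigram[i, j]

        # Any unit may start the sentence; a uniform prior is assumed here.
        pi = np.zeros(n_total)
        for j in range(len(unit_As)):
            pi[offsets[j]] = 1.0 / len(unit_As)
        return pi, A, B

Decoding a sentence then amounts to running the usual Viterbi search over this single composite HMM, so the unit boundaries and the number of units fall out of the best state path rather than being fixed in advance.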




