
The Decoding Problem and the Viterbi Algorithm

We want to find the most likely state sequence for a given observation sequence $O = o_1 o_2 \cdots o_T$ and a model $\lambda = (A, B, \pi)$.

The solution to this problem depends upon the way ``most likely state sequence'' is defined. One approach is to find the most likely state $q_t$ at each time $t$ individually and to concatenate all such states. However, this method sometimes fails to give a physically meaningful state sequence, because it ignores the transition probabilities between consecutive states. Therefore we use another method, which has no such problems.
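The failure of the per-step criterion can be demonstrated on a small example. The sketch below uses a toy 3-state model whose numbers are invented for illustration (they are not from the text); emission probabilities are omitted, since with identical emission distributions they cancel out of every posterior. Brute-force enumeration shows that the per-step most likely states form a sequence whose own joint probability is zero, because it uses a forbidden transition.

```python
import itertools
import numpy as np

# Toy 3-state HMM, chosen (hypothetical numbers) so that the per-step
# criterion picks an impossible path.  Note state 0 can never reach state 1.
pi = np.array([0.5, 0.25, 0.25])
A = np.array([[0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])

T = 2
# Joint probability of every length-T state path (emissions cancel out)
joint = {p: pi[p[0]] * A[p[0], p[1]]
         for p in itertools.product(range(3), repeat=T)}

# Marginal probability of each state at each time step
gamma = np.zeros((T, 3))
for p, pr in joint.items():
    for t, s in enumerate(p):
        gamma[t, s] += pr

# Concatenating the individually most likely states ...
per_step = tuple(int(np.argmax(gamma[t])) for t in range(T))
# ... yields (0, 1), a path that uses the forbidden transition 0 -> 1
print(per_step, joint[per_step])   # (0, 1) 0.0
```

Here $\gamma_1$ peaks at state 0 and $\gamma_2$ at state 1, yet $a_{01} = 0$, so the concatenated sequence can never occur; this is exactly the pathology the Viterbi algorithm avoids by maximizing over whole paths.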
In this method, commonly known as the Viterbi algorithm, the whole state sequence with the maximum likelihood is found. In order to facilitate the computation we define an auxiliary variable,

\[ \delta_t(i) = \max_{q_1, \ldots, q_{t-1}} P\{q_1, \ldots, q_{t-1},\; q_t = i,\; o_1, \ldots, o_t \mid \lambda\} \]

which gives the highest probability that the partial observation sequence and state sequence up to time $t$ can have, when the current state is $i$.
It is easy to observe that the following recursive relationship holds.

\[ \delta_{t+1}(j) = b_j(o_{t+1}) \max_{1 \le i \le N} \bigl[ \delta_t(i)\, a_{ij} \bigr], \qquad 1 \le j \le N \tag{1.8} \]

where,

\[ \delta_1(j) = \pi_j\, b_j(o_1), \qquad 1 \le j \le N \]
So the procedure to find the most likely state sequence starts from the calculation of $\delta_T(j),\; 1 \le j \le N$, using the recursion in (1.8), while always keeping a pointer to the ``winning state'' in the maximum-finding operation. Finally the state $q_T^*$ is found, where

\[ q_T^* = \arg\max_{1 \le j \le N} \delta_T(j) \]

and starting from this state, the sequence of states is back-tracked as the pointer in each state indicates. This gives the required set of states.
This whole algorithm can be interpreted as a search in a graph whose nodes are formed by the states of the HMM at each time instant $t = 1, 2, \ldots, T$.
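The procedure above can be sketched as follows. This is a minimal implementation assuming the standard parameterization $\lambda = (A, B, \pi)$ with $N$ states and discrete observations; the function name `viterbi` and the toy model numbers in the usage example are illustrative, not from the text.

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most likely state sequence for `obs` under an HMM with transition
    matrix A (N x N), emission matrix B (N x M), and initial distribution
    pi (N,).  Returns (state path, probability of that path)."""
    N = A.shape[0]
    T = len(obs)
    delta = np.zeros((T, N))           # delta[t, j]: best partial-path probability
    psi = np.zeros((T, N), dtype=int)  # back-pointers to the "winning state"

    delta[0] = pi * B[:, obs[0]]       # initialisation: delta_1(j) = pi_j b_j(o_1)
    for t in range(1, T):              # recursion (1.8)
        for j in range(N):
            scores = delta[t - 1] * A[:, j]          # delta_{t-1}(i) a_ij for all i
            psi[t, j] = np.argmax(scores)            # remember the winning state
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]

    # termination: q_T^* = argmax_j delta_T(j), then back-track via the pointers
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta[-1, path[-1]]

# Usage with a toy 2-state, 2-symbol HMM (illustrative numbers)
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
path, prob = viterbi(A, B, pi, [0, 1, 0])
print(path, prob)   # [0 1 0] 0.046656
```

For long observation sequences the products of probabilities underflow, so practical implementations work with logarithms, replacing the products in the recursion by sums of log-probabilities.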



Narada Warakagoda
Fri May 10 20:35:10 MET DST 1996
