
    HMM Learning (4): Hidden Markov Models


    4 Hidden Markov Models

    4.1 Definition of a hidden Markov model

    A hidden Markov model (HMM) is a triple (Π, A, B):

    Π = (π_i): the vector of the initial state probabilities, where π_i = P(x_1 = i);

    A = (a_ij): the state transition matrix, where a_ij = P(x_{t+1} = j | x_t = i);

    B = (b_ij): the confusion matrix, where b_ij = P(y_t = j | x_t = i), linking each hidden state to each observable state.

    Each probability in the state transition matrix and in the confusion matrix is time independent - that is, the matrices do not change in time as the system evolves. In practice, this is one of the most unrealistic assumptions of Markov models about real processes.

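    To make the triple concrete, here is a small Python/NumPy sketch of a (Π, A, B) for the seaweed/weather example used in this tutorial. The probability values are invented for illustration; only the shapes and the row-stochastic constraints come from the definition above.

        import numpy as np

        # Hidden states (weather) and observable states (seaweed dampness).
        states = ["sunny", "cloudy", "rainy"]
        observable = ["dry", "dryish", "damp", "soggy"]

        # Pi: vector of initial state probabilities, pi[i] = P(x_1 = i).
        pi = np.array([0.6, 0.3, 0.1])

        # A: state transition matrix, A[i, j] = P(x_{t+1} = j | x_t = i).
        A = np.array([[0.5, 0.3, 0.2],
                      [0.3, 0.4, 0.3],
                      [0.2, 0.3, 0.5]])

        # B: confusion matrix, B[i, k] = P(y_t = k | x_t = i).
        B = np.array([[0.60, 0.20, 0.15, 0.05],
                      [0.25, 0.25, 0.25, 0.25],
                      [0.05, 0.10, 0.35, 0.50]])

        # Time independence: the same A and B are reused at every step t.
        assert np.isclose(pi.sum(), 1.0)
        assert np.allclose(A.sum(axis=1), 1.0)
        assert np.allclose(B.sum(axis=1), 1.0)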

    4.2 Uses associated with HMMs

    Once a system can be described as a HMM, three problems can be solved. The first two are pattern recognition problems: Finding the probability of an observed sequence given a HMM (evaluation); and finding the sequence of hidden states that most probably generated an observed sequence (decoding). The third problem is generating a HMM given a sequence of observations (learning).

    1. Evaluation

    Consider the problem where we have a number of HMMs (that is, a set of (Π, A, B) triples) describing different systems, and a sequence of observations. We may want to know which HMM most probably generated the given sequence. For example, we may have a `Summer' model and a `Winter' model for the seaweed, since behaviour is likely to be different from season to season - we may then hope to determine the season on the basis of a sequence of dampness observations.

    We use the forward algorithm to calculate the probability of an observation sequence given a particular HMM, and hence choose the most probable HMM.

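    As a sketch, the forward algorithm fits in a few lines of Python. It assumes the pi, A, B arrays from the definition sketch above and an observation sequence encoded as column indices into B; the function name and the NumPy formulation are illustrative, not the tutorial's own.

        import numpy as np

        def forward_probability(obs, pi, A, B):
            """P(observation sequence | HMM), summed over all hidden state paths."""
            alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(y_1)
            for y in obs[1:]:
                # alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(y_{t+1})
                alpha = (alpha @ A) * B[:, y]
            return float(alpha.sum())

    To choose between a `Summer' model and a `Winter' model, evaluate the same dampness sequence, e.g. obs = [0, 1, 3] for (dry, dryish, soggy), under each triple and keep the model that returns the larger probability.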

    This type of problem occurs in speech recognition where a large number of Markov models will be used, each one modelling a particular word. An observation sequence is formed from a spoken word, and this word is recognised by identifying the most probable HMM for the observations.

    2. Decoding

    Finding the most probable sequence of hidden states given some observations

    Another related problem, and the one usually of most interest, is to find the hidden states that generated the observed output. In many cases we are interested in the hidden states of the model since they represent something of value that is not directly observable.

    Consider the example of the seaweed and the weather; a blind hermit can only sense the seaweed state, but needs to know the weather, i.e. the hidden states.

    We use the Viterbi algorithm to determine the most probable sequence of hidden states given a sequence of observations and a HMM.

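    A Python sketch of the Viterbi algorithm in the same illustrative style as the forward sketch above; instead of summing over paths it tracks the single best path probability and back-pointers for recovering it.

        import numpy as np

        def viterbi(obs, pi, A, B):
            """Most probable hidden state path for obs, and its probability."""
            delta = pi * B[:, obs[0]]          # best single-path probability so far
            backpointers = []
            for y in obs[1:]:
                trans = delta[:, None] * A     # trans[i, j]: best path to i, then i -> j
                backpointers.append(trans.argmax(axis=0))
                delta = trans.max(axis=0) * B[:, y]
            # Trace the back-pointers from the most probable final state.
            path = [int(delta.argmax())]
            for bp in reversed(backpointers):
                path.append(int(bp[path[-1]]))
            path.reverse()
            return path, float(delta.max())
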
    Another widespread application of the Viterbi algorithm is in Natural Language Processing, to tag words with their syntactic class (noun, verb etc.). The words in a sentence are the observable states and the syntactic classes are the hidden states (note that many words, such as `wind' or `fish', may have more than one syntactic interpretation). By finding the most probable hidden states for a sentence of words, we have found the most probable syntactic class for a word, given the surrounding context. Thereafter we may use the primitive grammar so extracted for a number of purposes, such as recapturing `meaning'.

    3. Learning

    Generating a HMM from a sequence of observations

    The third, and much the hardest, problem associated with HMMs is to take a sequence of observations (from a known set), known to represent a set of hidden states, and fit the most probable HMM; that is, determine the (Π, A, B) triple that most probably describes what is seen.

    The forward-backward algorithm is of use when the matrices A and B are not directly (empirically) measurable, as is very often the case in real applications.

    AB不能被直接测量的时候(这也是现实中常见的情况),前向后向算法将发挥作用。

     

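    The sketch below shows one forward-backward (Baum-Welch) re-estimation step, again in illustrative Python; a real application iterates this update until the likelihood of the observations stops improving. The alpha/beta/gamma/xi decomposition is the standard EM formulation of the algorithm, not notation given in this section.

        import numpy as np

        def baum_welch_step(obs, pi, A, B):
            """One EM update of (pi, A, B) from a single observation sequence."""
            obs = np.asarray(obs)
            T, N = len(obs), len(pi)
            # Forward pass: alpha[t, i] = P(y_1..y_t, x_t = i).
            alpha = np.zeros((T, N))
            alpha[0] = pi * B[:, obs[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            # Backward pass: beta[t, i] = P(y_{t+1}..y_T | x_t = i).
            beta = np.ones((T, N))
            for t in range(T - 2, -1, -1):
                beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
            likelihood = alpha[-1].sum()
            # gamma[t, i] = P(x_t = i | obs);
            # xi[t, i, j] = P(x_t = i, x_{t+1} = j | obs).
            gamma = alpha * beta / likelihood
            xi = (alpha[:-1, :, None] * A[None, :, :]
                  * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
            # Re-estimate the triple from these expected counts.
            new_pi = gamma[0]
            new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
            new_B = np.stack([gamma[obs == k].sum(axis=0)
                              for k in range(B.shape[1])], axis=1)
            new_B /= gamma.sum(axis=0)[:, None]
            return new_pi, new_A, new_B
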
    4.3 Summary

    HMMs, described by a vector and two matrices (Π, A, B), are of great value in describing real systems since, although usually only an approximation, they are amenable to analysis. Commonly solved problems are:

    matching the most likely system to a sequence of observations - evaluation, solved using the forward algorithm;

    determining the hidden sequence most likely to have generated a sequence of observations - decoding, solved using the Viterbi algorithm;

    determining the model parameters most likely to have generated a sequence of observations - learning, solved using the forward-backward algorithm.

Source: https://www.cnblogs.com/hyubz/p/3620384.html