  • ML | EM

    What's EM

    The EM algorithm is used to find the maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points. 

    The motivation is as follows. If we know the value of the parameters $\boldsymbol\theta$, we can usually find the value of the latent variables $\mathbf{Z}$ by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:

    1. First, initialize the parameters $\boldsymbol\theta$ to some random values.
    2. Compute the best value for $\mathbf{Z}$ given these parameter values.
    3. Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$. Parameters associated with a particular value of $\mathbf{Z}$ will use only those data points whose associated latent variable has that value.
    4. Iterate steps 2 and 3 until convergence.

    The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called hard EM. The k-means algorithm is an example of this class of algorithms.
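
    As a concrete illustration of hard EM, below is a minimal k-means sketch in Python (my own sketch, not from the original post; it assumes NumPy, a data matrix `X` of shape `(n, d)`, and `k` clusters). The assignment step picks the single best value of $\mathbf{Z}$ for each point, and the update step re-estimates the parameters $\boldsymbol\theta$ (the centroids) from those hard assignments:

    ```python
    import numpy as np

    def kmeans(X, k, n_iters=100, seed=0):
        """Hard EM: alternate hard assignments (E-like step) with
        centroid re-estimation from assigned points (M-like step)."""
        rng = np.random.default_rng(seed)
        # Initialize the parameters (centroids) from random data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iters):
            # Assignment step: choose the best value of z_i per point,
            # i.e. the index of the nearest centroid.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            z = dists.argmin(axis=1)
            # Update step: each centroid uses only the points whose
            # latent variable has that value (empty clusters are kept).
            new_centroids = np.array([
                X[z == j].mean(axis=0) if np.any(z == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, z
    ```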

    However, we can do somewhat better. Rather than making a hard choice for $\mathbf{Z}$ given the current parameter values and averaging only over the set of data points associated with a particular value of $\mathbf{Z}$, we can instead determine the probability of each possible value of $\mathbf{Z}$ for each data point, and then use the probabilities associated with a particular value of $\mathbf{Z}$ to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM.
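
    For example, with a one-dimensional mixture of two Gaussians, the probability of each possible value of $\mathbf{Z}$ for each data point is the posterior "responsibility", and the parameter updates become responsibility-weighted averages over all points. A minimal soft-EM sketch (my own, assuming NumPy and SciPy; variable names are illustrative):

    ```python
    import numpy as np
    from scipy.stats import norm

    def soft_em_gmm(x, n_iters=50):
        """Soft EM for a two-component 1-D Gaussian mixture: every point
        contributes to every component, weighted by its responsibility."""
        # Crude initialization of means, std deviations, mixing weights.
        mu = np.array([x.min(), x.max()])
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(n_iters):
            # E step: responsibilities r[i, j] = P(z_i = j | x_i, theta).
            dens = pi * norm.pdf(x[:, None], mu, sigma)   # shape (n, 2)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M step: responsibility-weighted averages over *all* points.
            nk = r.sum(axis=0)                            # effective counts
            mu = (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            pi = nk / len(x)
        return mu, sigma, pi
    ```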

    Because it can handle missing data and unobserved latent variables, EM has also become a useful tool for pricing and managing the risk of a portfolio.

    Algorithm

    Given a statistical model consisting of a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data

    $L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta)$
    However, this quantity is often intractable (e.g. if $\mathbf{Z}$ is a sequence of events, so that the number of values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).
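
    To make the blow-up concrete (an illustrative count, not from the original post): with $n$ observations, each carrying a latent variable $z_i$ that takes one of $k$ values, the marginal likelihood is a sum over every joint configuration,

    $p(\mathbf{X}|\boldsymbol\theta) = \sum_{z_1=1}^{k} \cdots \sum_{z_n=1}^{k} p(\mathbf{X}, z_1, \ldots, z_n|\boldsymbol\theta),$

    i.e. $k^n$ terms. For i.i.d. mixture models this factorizes into a product of $n$ per-point sums of $k$ terms each, but when the latent variables are dependent (e.g. an HMM state sequence), the direct sum does not decompose so easily.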

    The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:

    1. Expectation step (E step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ under the current estimate of the parameters $\boldsymbol\theta^{(t)}$:
    $Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) \right]$
    2. Maximization step (M step): Find the parameter that maximizes this quantity:
    $\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}}\; Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})$
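
    As a worked instance (notation mine, matching the Gaussian-mixture sketch above): for a mixture of Gaussians, maximizing $Q$ has a closed form. The E step reduces to computing the responsibilities

    $\gamma_{ij}^{(t)} = p(z_i = j|x_i, \boldsymbol\theta^{(t)}) = \frac{\pi_j^{(t)} \mathcal{N}(x_i; \mu_j^{(t)}, (\sigma_j^{(t)})^2)}{\sum_l \pi_l^{(t)} \mathcal{N}(x_i; \mu_l^{(t)}, (\sigma_l^{(t)})^2)},$

    and the M step plugs them in as weights:

    $\mu_j^{(t+1)} = \frac{\sum_i \gamma_{ij}^{(t)} x_i}{\sum_i \gamma_{ij}^{(t)}}, \qquad \pi_j^{(t+1)} = \frac{1}{n} \sum_i \gamma_{ij}^{(t)}$
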
    Note that in typical models to which EM is applied:

    • The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). There may in fact be a vector of observations associated with each data point.
    • The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
    • The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has that value).
  • Source: https://www.cnblogs.com/linyx/p/3856131.html