• ML | EM

What's EM

    The EM algorithm is used to find the maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points. 

The motivation is as follows. If we know the value of the parameters $\boldsymbol\theta$, we can usually find the value of the latent variables $\mathbf{Z}$ by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:

1. First, initialize the parameters $\boldsymbol\theta$ to some random values.
2. Compute the best value for $\mathbf{Z}$ given these parameter values.
3. Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$. Parameters associated with a particular value of $\mathbf{Z}$ will use only those data points whose associated latent variable has that value.
4. Iterate steps 2 and 3 until convergence.

    The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called hard EM. The k-means algorithm is an example of this class of algorithms.
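To make the hard-EM loop concrete, here is a minimal k-means sketch in Python/NumPy. The function name `kmeans_hard_em`, the random initialization, and the empty-cluster handling are illustrative assumptions rather than a prescribed implementation; `X` is assumed to be an (n, d) array.

```python
import numpy as np

def kmeans_hard_em(X, k, n_iters=100, seed=0):
    """A minimal sketch of hard EM as realized by k-means."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize the parameters (centroids) from random data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 2 (hard E step): the best value of Z for each point is the
        # index of its nearest centroid -- a hard assignment.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        z = dists.argmin(axis=1)
        # Step 3 (M step): each centroid is re-estimated using only the
        # points whose latent variable has that value.
        new_centroids = np.array([
            X[z == j].mean(axis=0) if np.any(z == j) else centroids[j]
            for j in range(k)
        ])
        # Step 4: iterate until convergence.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, z
```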

However, we can do somewhat better. Rather than making a hard choice for $\mathbf{Z}$ given the current parameter values and averaging only over the set of data points associated with a particular value of $\mathbf{Z}$, we can instead determine the probability of each possible value of $\mathbf{Z}$ for each data point, and then use the probabilities associated with a particular value of $\mathbf{Z}$ to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM.
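As an illustration of the soft assignment, the sketch below performs one soft-EM update for a one-dimensional Gaussian mixture; the model choice, the function name `soft_em_step`, and the parameterization are assumptions made for the example. The E step computes per-point responsibilities, and the M step takes responsibility-weighted averages over the entire data set.

```python
import numpy as np
from scipy.stats import norm

def soft_em_step(x, weights, means, stds):
    """One soft-EM update for a 1-D Gaussian mixture (illustrative)."""
    # E step: r[i, j] = P(Z_i = j | x_i, current parameters).
    dens = np.array([w * norm.pdf(x, m, s)
                     for w, m, s in zip(weights, means, stds)]).T
    r = dens / dens.sum(axis=1, keepdims=True)
    # M step: weighted averages over *all* points, weighted by responsibility.
    nk = r.sum(axis=0)
    new_weights = nk / len(x)
    new_means = (r * x[:, None]).sum(axis=0) / nk
    new_stds = np.sqrt((r * (x[:, None] - new_means) ** 2).sum(axis=0) / nk)
    return new_weights, new_means, new_stds
```

Hard EM is recovered as the special case in which each row of responsibilities is rounded to a one-hot vector before the M step.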

Because it can handle missing data and unobserved variables, EM has also become a useful tool for pricing and managing the risk of a portfolio.

    Algorithm

Given a statistical model consisting of a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data

$L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)$
However, this quantity is often intractable (e.g. if $\mathbf{Z}$ is a sequence of events, so that the number of values grows exponentially with the sequence length, making the exact calculation of the sum extremely difficult).
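To see what the sum over $\mathbf{Z}$ looks like when it is tractable, consider a hypothetical two-coin model, chosen only for concreteness: a latent $Z \in \{0, 1\}$ picks one of two coins with probability $\pi$, and $X$ counts heads in $n$ tosses of that coin. With only two latent values, the marginal likelihood is a two-term sum:

```python
from scipy.stats import binom

def marginal_likelihood(x, n, pi, p0, p1):
    """p(X = x | theta) = sum over Z in {0, 1} of p(X, Z | theta)."""
    return pi * binom.pmf(x, n, p0) + (1 - pi) * binom.pmf(x, n, p1)
```

By contrast, if $\mathbf{Z}$ were a sequence of $T$ states each taking $K$ values, the same sum would have $K^T$ terms, which is what makes direct evaluation infeasible.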

    The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:

1. Expectation step (E step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ under the current estimate of the parameters $\boldsymbol\theta^{(t)}$:
$Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) \right]$
2. Maximization step (M step): Find the parameter that maximizes this quantity:
$\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}}\ Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})$
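For the hypothetical two-coin model sketched earlier (all names here are illustrative), both steps have closed forms: the E step reduces to posterior responsibilities under $\boldsymbol\theta^{(t)}$, and the M step maximizes $Q(\boldsymbol\theta|\boldsymbol\theta^{(t)})$ with responsibility-weighted coin-flip MLEs.

```python
import numpy as np
from scipy.stats import binom

def em_step(heads, n, pi, p0, p1):
    """One full EM update for the two-coin mixture (illustrative)."""
    heads = np.asarray(heads, dtype=float)
    # E step: r_i = P(Z_i = 0 | X_i, theta^(t)).
    l0 = pi * binom.pmf(heads, n, p0)
    l1 = (1 - pi) * binom.pmf(heads, n, p1)
    r = l0 / (l0 + l1)
    # M step: argmax of Q has a closed form -- weighted binomial MLEs.
    new_pi = r.mean()
    new_p0 = (r * heads).sum() / (r * n).sum()
    new_p1 = ((1 - r) * heads).sum() / ((1 - r) * n).sum()
    return new_pi, new_p0, new_p1
```

Iterating `em_step` until the parameters stabilize never decreases the observed-data log-likelihood, though it may converge to a local rather than a global maximum.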
    Note that in typical models to which EM is applied:

• The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). There may in fact be a vector of observations associated with each data point.
• The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
• The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has that value).
Original post: https://www.cnblogs.com/linyx/p/3856131.html