  • 2018 10-708 (CMU) Probabilistic Graphical Models {Lecture 5} [Algorithms for Exact Inference]

     Exact inference is not at the frontier of research, but its results are commonly used now.

    Suppose X_{k+1}, ..., X_n are observed (known);

    to calculate the (joint) probability of the remaining variables X_1, ..., X_k given them, we have to do inference.
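    In symbols, the task is the standard conditional query, with the variables split into query X_1, ..., X_k and evidence x_{k+1}, ..., x_n as above:

```latex
P(X_1,\dots,X_k \mid x_{k+1},\dots,x_n)
  = \frac{P(X_1,\dots,X_k,\; x_{k+1},\dots,x_n)}
         {\sum_{x_1,\dots,x_k} P(x_1,\dots,x_k,\; x_{k+1},\dots,x_n)}
```

    The denominator already shows why inference is costly: it sums the joint over every assignment of the unobserved variables.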

     

     Recent research focuses on approximate inference techniques.

    approx:

    1) optimization-based

    2) sampling-based

     

    Comparing the computational complexity (n variables, each with K states):

        Naive summation over the full joint: O(K^n)

        Exploiting the chain-rule factorization (variable elimination): O(n*K^2)

        (the lecture's example uses n = 4; see the code sketch below)
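    A minimal runnable sketch of this comparison, assuming a chain-structured model X1 -> X2 -> ... -> Xn with randomly generated CPTs (the chain and the tables are illustrative, not from the lecture):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, K = 4, 3  # n variables in a chain, each with K states

# p0[i] = P(X1 = i); trans[t][i, j] = P(X_{t+2} = j | X_{t+1} = i)
p0 = rng.dirichlet(np.ones(K))
trans = [rng.dirichlet(np.ones(K), size=K) for _ in range(n - 1)]

def joint(xs):
    """Joint probability of one full assignment (x1, ..., xn)."""
    p = p0[xs[0]]
    for t in range(n - 1):
        p *= trans[t][xs[t], xs[t + 1]]
    return p

# Naive way: sum the joint over all K**n assignments to get P(X_n).
naive = np.zeros(K)
for xs in product(range(K), repeat=n):
    naive[xs[-1]] += joint(xs)

# Chain-rule / variable elimination: n - 1 matrix-vector products,
# each costing O(K^2), so O(n * K^2) in total.
msg = p0
for t in range(n - 1):
    msg = msg @ trans[t]  # msg[j] = sum_i msg[i] * P(next = j | current = i)

print(np.allclose(naive, msg))  # True: both compute the marginal P(X_n)
```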

     

    Chain rule derivation:
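    A sketch of the derivation for a chain X_1 -> X_2 -> ... -> X_n (pushing the sums inside the factorized product; the chain structure matches the complexity comparison above):

```latex
P(x_n) = \sum_{x_1,\dots,x_{n-1}} P(x_1)\, P(x_2 \mid x_1) \cdots P(x_n \mid x_{n-1})
       = \sum_{x_{n-1}} P(x_n \mid x_{n-1}) \cdots \sum_{x_2} P(x_3 \mid x_2) \sum_{x_1} P(x_2 \mid x_1)\, P(x_1)
```

    Each inner sum produces a table over a single variable and costs O(K^2); there are n - 1 of them, which is where n*K^2 comes from, versus K^n for the naive outer sum.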


    Answering a query means marginalizing out the rest of the variables. In the eight-variable example the joint factorizes as

     P(a) P(b) P(c|b) ...... P(h|e,f), and the variables are summed out in the order a, b, c, d, e, f, g, h (the elimination sequence).

     

    Eliminating h introduces a new term m_h(e, f), which makes e and f dependent (they now appear together in one factor).
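    In formulas, using the factor P(h|e,f) from the factorization above:

```latex
m_h(e, f) \;=\; \sum_{h} P(h \mid e, f)
```

    If h is observed with some value \hat{h}, the sum is replaced by the evidence slice m_h(e, f) = P(\hat{h} | e, f); either way the new term has scope {e, f}.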

    Other elimination steps introduce no new dependency here (the resulting term does not couple any variables that were not already coupled).

    Different elimination sequences lead to different computational complexity.

    The cost depends on how large the newly created clique (the scope of the intermediate term) is.

    If a single elimination step connects every remaining vertex, the intermediate factor becomes exponentially large and you are in trouble; see the sketch below.
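    A small numpy sketch of how the elimination ordering changes the size of the intermediate factors (the star-shaped network, both orderings, and all names here are illustrative assumptions, not the lecture's example):

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 3, 6                     # K states per variable, L leaf variables

# Star-shaped network: one centre c and leaves x1..xL, joint = P(c) * prod_i P(x_i | c)
p_c = rng.dirichlet(np.ones(K))                              # P(c)
p_x = [rng.dirichlet(np.ones(K), size=K) for _ in range(L)]  # p_x[i][c, x_i] = P(x_i | c)

# Goal: the marginal P(x1).

# Good ordering: eliminate x2..xL first (each step touches only {c, x_i}, at most K^2
# entries), then eliminate c.
factor_c = p_c.copy()
for i in range(1, L):
    factor_c *= p_x[i].sum(axis=1)   # sum_{x_i} P(x_i | c) = 1, scope stays {c}
good = factor_c @ p_x[0]             # sum_c factor_c(c) * P(x1 | c)

# Bad ordering: eliminate the centre c first. The resulting factor couples all
# leaves at once and has K^L entries (here 3^6 = 729).
big = np.einsum('a,ab,ac,ad,ae,af,ag->bcdefg',
                p_c, p_x[0], p_x[1], p_x[2], p_x[3], p_x[4], p_x[5])
bad = big.sum(axis=(1, 2, 3, 4, 5))   # now sum out x2..xL

print(big.size)                 # 729 intermediate entries vs. at most K^2 = 9 in the good ordering
print(np.allclose(good, bad))   # True: same answer, very different cost
```

    The answer is identical either way; only the size of the largest intermediate factor, and hence the cost, differs.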

     

      

    If there is a loop in the graph, message passing can no longer be viewed as passing messages from one variable to another; instead, messages are passed from clique to clique (as in the junction tree algorithm).

  • Original post: https://www.cnblogs.com/ecoflex/p/10231273.html