  • Factor Analysis

    When we need Factor Analysis:

    Factor Analysis: If we have very little data but still want to model it with a Gaussian distribution, fitting a full Gaussian directly gives a pretty bad result: the fitted Gaussian collapses into the affine subspace spanned by the data (long and thin).


    Of course we could still fit a single Gaussian by restricting the model, for example to a diagonal covariance or even a scaled identity covariance; otherwise the covariance matrix would be singular (because the number of training examples is no larger than the number of features). But with these kinds of restrictions on the covariance matrix we ignore the relationships between different features, which may be exactly the interesting relations.
    Factor analysis helps us fit such scarce training data with a multivariate Gaussian while retaining some of those interesting relations between features.
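    As a concrete illustration (a minimal numpy sketch, not from the original note; the variable names are mine): with fewer examples than features the maximum-likelihood covariance estimate is singular, while the diagonal restriction stays invertible but drops every cross-feature correlation.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 5, 10                                 # m examples, n features, m < n
    X = rng.normal(size=(m, n))

    mu = X.mean(axis=0)
    Xc = X - mu

    # Full ML estimate: Sigma = (1/m) * sum_i (x_i - mu)(x_i - mu)^T
    Sigma_full = Xc.T @ Xc / m
    print(np.linalg.matrix_rank(Sigma_full))     # at most m - 1 = 4, so singular in R^10

    # Diagonal restriction: invertible, but all cross-feature correlations are gone
    Sigma_diag = np.diag(Xc.var(axis=0))
    print(np.linalg.matrix_rank(Sigma_diag))     # n = 10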


    In this note we again use EM to solve the problem; this time the latent variable z is assumed to follow the standard normal distribution N(0, I), which is a continuous distribution (rather than the discrete latent variable used in the mixture-of-Gaussians model).
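    Since the note only states that EM is used, here is a minimal numpy sketch of the standard EM updates for the factor analysis model x = μ + Λz(i) + ε described in the next section (the function and variable names are mine, not from the note):

    import numpy as np

    def factor_analysis_em(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        m, n = X.shape
        mu = X.mean(axis=0)
        Xc = X - mu                                  # center the data; mu is the ML mean

        Lam = rng.normal(size=(n, k))                # factor loading matrix Lambda (n x k)
        Psi = Xc.var(axis=0) + 1e-6                  # diagonal noise variances (kept as a vector)

        for _ in range(n_iter):
            # E-step: posterior z | x is Gaussian with covariance G (shared by all examples)
            PsiInvLam = Lam / Psi[:, None]           # Psi^{-1} Lambda
            G = np.linalg.inv(np.eye(k) + Lam.T @ PsiInvLam)
            Ez = Xc @ PsiInvLam @ G                  # (m, k): E[z | x_i] for each example
            Ezz_sum = m * G + Ez.T @ Ez              # sum_i E[z z^T | x_i]

            # M-step: re-estimate Lambda and Psi
            XEz = Xc.T @ Ez                          # sum_i (x_i - mu) E[z | x_i]^T
            Lam = XEz @ np.linalg.inv(Ezz_sum)
            Psi = np.mean(Xc ** 2, axis=0) - np.sum(Lam * XEz, axis=1) / m
            Psi = np.maximum(Psi, 1e-6)              # keep the noise variances positive

        return mu, Lam, Psi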

    The intuition of Factor Analysis:

    Since the number of training examples is much smaller than the number of features, we assume each example comes from a k-dimensional latent variable: z(i) is drawn from a k-dimensional multivariate Gaussian (standard normal, with k smaller than the number of features). We then map z(i) into the original feature space through Λz(i) (Λ is an n×k matrix) and shift it with μ: μ+Λz(i). In this way the data is modeled as lying near a smaller, k-dimensional subspace of the feature space. Finally we add noise ε(i) with diagonal covariance Ψ to μ+Λz(i), and we get: x(i)=μ+Λz(i)+ε(i), where ε(i)∼N(0,Ψ).
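    A small sketch of this generative story (numpy; the sizes and names are illustrative): draw z from N(0, I_k), map it with Λ, shift by μ, and add diagonal Gaussian noise drawn from N(0, Ψ).

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, m = 10, 2, 5                            # n features, k latent dimensions, m samples

    mu = rng.normal(size=n)                       # mean in the original feature space
    Lam = rng.normal(size=(n, k))                 # n x k factor loading matrix Lambda
    Psi = np.diag(rng.uniform(0.1, 0.5, size=n))  # diagonal noise covariance

    Z = rng.normal(size=(m, k))                               # z(i) ~ N(0, I_k)
    eps = rng.multivariate_normal(np.zeros(n), Psi, size=m)   # eps(i) ~ N(0, Psi)
    X = mu + Z @ Lam.T + eps                                  # x(i) = mu + Lambda z(i) + eps(i)

    # Marginally x ~ N(mu, Lambda Lambda^T + Psi): a full-rank covariance that still
    # captures cross-feature correlations through Lambda.
    print(X.shape)                                # (5, 10)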

  • Original article: https://www.cnblogs.com/flytomylife/p/3106877.html