  • Factor Analysis

    When we need Factor Analysis:

    Factor analysis: when we have very little data but still want to model it with a Gaussian distribution, fitting a Gaussian directly gives a poor result — the maximum-likelihood Gaussian collapses into the affine subspace spanned by the data (long and thin).


    We could of course still fit a single Gaussian with a restricted covariance matrix, say a diagonal covariance or even a multiple of the identity; without such restrictions the covariance estimate would be singular (since the number of training examples is no more than the number of features). But these restrictions on the covariance matrix throw away the correlations between different features, which may be exactly the interesting structure.
    Factor analysis lets us fit scarce training data with a multivariate Gaussian while still retaining interesting correlations between features.
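The singularity problem above is easy to see numerically. A small sketch (the sample sizes n = 5 and d = 10 are illustrative assumptions): when the number of examples is no more than the number of features, the maximum-likelihood covariance estimate has rank at most n - 1, so it cannot be inverted and the Gaussian density is undefined.

```python
import numpy as np

# Illustrative sizes: n = 5 examples, d = 10 features (n < d).
rng = np.random.default_rng(0)
n, d = 5, 10
X = rng.normal(size=(n, d))

# Maximum-likelihood covariance estimate (divides by n).
mu = X.mean(axis=0)
Sigma = (X - mu).T @ (X - mu) / n

# With n <= d the estimate has rank at most n - 1, so it is
# singular and Sigma cannot be inverted.
rank = np.linalg.matrix_rank(Sigma)
print(rank, rank < d)
```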


    In this note EM is again used to solve the problem, and this time the latent variable z is assumed to follow a standard normal distribution, which is a continuous distribution.
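Because z is continuous, the E-step computes a Gaussian posterior over z rather than discrete responsibilities. A minimal numpy sketch of the EM updates for the factor-analysis model x = μ + Λz + ε, z ~ N(0, I), ε ~ N(0, Ψ) with diagonal Ψ (the sizes and synthetic data below are illustrative assumptions, following the standard closed-form updates):

```python
import numpy as np

# Illustrative sizes: n examples, d features, k latent factors.
rng = np.random.default_rng(3)
n, d, k = 500, 4, 2

# Synthetic data drawn from a true factor-analysis model.
true_Lam = rng.normal(size=(d, k))
true_psi = np.full(d, 0.1)
X = rng.normal(size=(n, k)) @ true_Lam.T \
    + rng.normal(size=(n, d)) * np.sqrt(true_psi)

mu = X.mean(axis=0)
Xc = X - mu
Lam = rng.normal(size=(d, k))   # loading matrix Lambda
Psi = np.ones(d)                # diagonal of the noise covariance

for _ in range(100):
    # E-step: z | x is Gaussian with a shared posterior covariance.
    PsiInv = np.diag(1.0 / Psi)
    Sig_z = np.linalg.inv(np.eye(k) + Lam.T @ PsiInv @ Lam)
    Ez = Xc @ PsiInv @ Lam @ Sig_z      # (n, k) posterior means
    Ezz = n * Sig_z + Ez.T @ Ez         # sum_i E[z_i z_i^T]
    # M-step: closed-form updates for Lambda and diagonal Psi.
    Lam = (Xc.T @ Ez) @ np.linalg.inv(Ezz)
    Psi = np.diag(Xc.T @ Xc - Lam @ (Ez.T @ Xc)) / n
    Psi = np.maximum(Psi, 1e-6)         # guard against degenerate noise

# The fitted marginal covariance Lam Lam^T + diag(Psi) should
# approach the sample covariance of the data.
model_cov = Lam @ Lam.T + np.diag(Psi)
samp_cov = Xc.T @ Xc / n
print(np.abs(model_cov - samp_cov).max())
```

Note that unlike fitting an unrestricted Gaussian, every quantity inverted here is k x k or diagonal, so the updates remain well-defined even with few examples.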

    The intuition of Factor Analysis:

    Since the number of training examples is much smaller than the number of features, we assume a latent variable z(i) drawn from a k-dimensional multivariate Gaussian, map it into the feature space through Λz(i), and then shift it by μ: μ + Λz(i). In this way each example is generated from a point in a smaller k-dimensional subspace. Finally we add a noise term ε with diagonal covariance Ψ to μ + Λz(i), and we get: x = μ + Λz(i) + ε, where ε ∼ N(0, Ψ).
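This generative story can be sketched numerically. Marginalizing out z gives x ~ N(μ, ΛΛᵀ + Ψ), which the sample below checks empirically (all sizes and parameters are illustrative assumptions):

```python
import numpy as np

# Illustrative sizes: d = 5 observed features, k = 2 latent factors.
rng = np.random.default_rng(1)
d, k, n = 5, 2, 10000

mu = rng.normal(size=d)
Lam = rng.normal(size=(d, k))              # loading matrix Lambda
Psi = np.diag(rng.uniform(0.1, 0.5, d))    # diagonal noise covariance

z = rng.normal(size=(n, k))                # z ~ N(0, I_k)
eps = rng.multivariate_normal(np.zeros(d), Psi, size=n)
X = mu + z @ Lam.T + eps                   # x = mu + Lambda z + eps

# Marginally x ~ N(mu, Lambda Lambda^T + Psi); compare empirically.
emp_cov = np.cov(X, rowvar=False)
model_cov = Lam @ Lam.T + Psi
print(np.abs(emp_cov - model_cov).max())   # shrinks as n grows
```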

  • Original post: https://www.cnblogs.com/flytomylife/p/3106877.html