
    Machine Learning

    Linear Regression

    1. Hypothesis: \(h_\theta(x)=\sum_{i=0}^{m}\theta_i x_i\), where \(x_0=1\)

    2. Cost Function: \(J(\theta)=\frac{1}{2}\sum_{i=1}^{n}(h_\theta(x^{(i)})-y^{(i)})^2=\frac{1}{2}(X\theta-Y)^T(X\theta-Y)\) (a quick numeric check that the two forms agree appears after this list)

    3. Two methods for minimizing \(J(\theta)\):

      (1) Closed-form solution: \(\theta=(X^TX)^{-1}X^TY\)

      (2) Gradient descent: repeat \(\theta:=\theta-\alpha\frac{\partial}{\partial\theta}J(\theta)=\theta-\alpha\sum_{i=1}^{n}(h_\theta(x^{(i)})-y^{(i)})x^{(i)}=\theta-\alpha X^T(X\theta-Y)\)
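
      To confirm that the summation and matrix forms of \(J(\theta)\) above agree, here is a quick numeric check on hypothetical toy data (this check is not from the original post):

      import numpy as np

      # Hypothetical toy data: rows are examples, first column is the bias x_0 = 1
      X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 5.0]])
      Y = np.array([4.0, 6.0, 9.0])
      theta = np.array([0.5, 1.5])

      residual = X @ theta - Y
      J_sum = 0.5 * np.sum(residual ** 2)   # summation form
      J_mat = 0.5 * residual @ residual     # matrix form (X theta - Y)^T (X theta - Y)
      print(np.isclose(J_sum, J_mat))       # True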

    Normalize the data in order to accelerate gradient descent: \(x:=(x-\mu)/(\max-\min)\) or \(x:=(x-\min)/(\max-\min)\)
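
    For example, a minimal min-max scaling helper (hypothetical, not from the original post):

    import numpy as np

    def min_max_scale(x):
        # x := (x - min) / (max - min), mapping the feature into [0, 1]
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    print(min_max_scale([2000, 2005, 2013]))   # [0.0, 0.3846..., 1.0]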

    4. Python code for this problem (fit the data for years 2000 to 2013, then predict the value for 2014):

      (1) Closed-form solution:

      import numpy as np

      # Years (feature) and observed values (target)
      years = [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013]
      values = [2.000, 2.500, 2.900, 3.147, 4.515, 4.903, 5.365, 5.704, 6.853, 7.971, 8.561, 10.000, 11.280, 12.900]

      # 2 x 14 design matrix: first row is the bias feature x_0 = 1
      X = np.mat([np.ones(len(years)).tolist(), years])
      Y = np.mat(values)                       # 1 x 14

      # Closed-form solution; with this layout theta = (X X^T)^{-1} X Y^T
      theta = (X * X.T).I * X * Y.T            # 2 x 1
      print(theta)
      theta0, theta1 = theta[0, 0], theta[1, 0]
      print(2014 * theta1 + theta0)            # prediction for 2014
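
      As a sanity check (this cross-check is not in the original post), NumPy's built-in least-squares solver should recover the same coefficients:

      # Hypothetical cross-check against numpy.linalg.lstsq
      A = np.column_stack([np.ones(len(years)), years])   # 14 x 2 design matrix
      coef, *_ = np.linalg.lstsq(A, np.array(values), rcond=None)
      print(coef)   # should match theta up to floating-point error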
      

      (2) Gradient descent:

      import numpy as np

      def gradient(theta, X, Y):
          # Gradient of J(theta): X (theta^T X - Y)^T for a 2 x n design matrix X
          return X * (theta.T * X - Y).T

      years = [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013]
      values = [2.000, 2.500, 2.900, 3.147, 4.515, 4.903, 5.365, 5.704, 6.853, 7.971, 8.561, 10.000, 11.280, 12.900]

      # Scale features and targets so gradient descent converges (see the note above);
      # the bias row of ones is added after scaling so it is not distorted
      x = np.array(years) / 2000.0
      y = np.array(values) / 12.0
      X = np.mat([np.ones(len(x)).tolist(), x.tolist()])   # 2 x 14
      Y = np.mat(y)                                        # 1 x 14

      alpha = 0.01
      theta = np.mat(np.zeros((2, 1)))
      last = 0.0
      while True:
          theta = theta - alpha * gradient(theta, X, Y)
          if abs(last - theta[0, 0]) <= 1e-6:
              break
          last = theta[0, 0]
      print(theta)
      # Undo the scaling when predicting the value for 2014
      print((theta[1, 0] * (2014 / 2000.0) + theta[0, 0]) * 12)
      

    Logistic Regression

    Binary Classification

    Hypothesis:

    Define the sigmoid function \(\delta(x)=\frac{1}{1+e^{-x}}\)

    \(h_\theta(x)=\delta(\theta^Tx)=\frac{1}{1+e^{-\theta^Tx}}\)

    Actually, \(h_\theta(x)\) can be seen as the probability that \(y\) equals 1, that is, \(p(y=1\mid x;\theta)\)
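
    As a minimal sketch of the hypothesis (the parameter and example values here are hypothetical, not from the original post):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical parameters and one example with bias x_0 = 1
    theta = np.array([-3.0, 1.5])
    x = np.array([1.0, 2.5])
    h = sigmoid(theta @ x)   # h_theta(x) = delta(theta^T x)
    print(h)                 # read as p(y = 1 | x; theta)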

    Cost Function:

    \[\mathrm{cost}(h_\theta(x),y)=\begin{cases}-\ln(h_\theta(x)), & y=1\\-\ln(1-h_\theta(x)), & y=0\end{cases}=-y\ln(h_\theta(x))-(1-y)\ln(1-h_\theta(x))\]

    \[J(\theta)=\sum_{i=1}^{n}\mathrm{cost}(h_\theta(x^{(i)}),y^{(i)})\]

    Gradient descent to minimize \(J(\theta)\):

    repeat: \[\theta:=\theta-\alpha\sum_{i=1}^{n}(h_\theta(x^{(i)})-y^{(i)})x^{(i)}\]
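
    Putting the pieces together, here is a minimal gradient-descent sketch for logistic regression on hypothetical toy data (the data and learning rate are illustrative, not from the original post):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical toy data: rows are examples, first column is the bias x_0 = 1
    X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 3.0], [1.0, 4.5]])
    y = np.array([0.0, 0.0, 1.0, 1.0])

    alpha = 0.1
    theta = np.zeros(2)
    for _ in range(1000):
        h = sigmoid(X @ theta)            # h_theta(x) for every example
        theta -= alpha * X.T @ (h - y)    # theta := theta - alpha * sum (h - y) x

    # Cross-entropy cost J(theta) from the formula above
    h = sigmoid(X @ theta)
    J = -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
    print(theta, J)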
