  • Andrew Ng Machine Learning, Part 2: Logistic Regression

    Part I: Logistic Regression

      Background: suppose you are the head of a university admissions office and you decide whether to admit each applicant based on exam scores. You are given a historical dataset, ex2data1.txt: the first column is the score on the first exam, the second column is the score on the second exam, and the third column is the label, where 1 means admitted and 0 means not admitted. Using this dataset, build a model that can serve as the admission criterion from now on.

      

      Visualizing the dataset shows that the two classes are roughly separated by a straight line, and there are only two possible outcomes, so we use logistic regression to fit the data.

      

      1. The main script, ex2.m:

    %% Machine Learning Online Class - Exercise 2: Logistic Regression
    %
    %  Instructions
    %  ------------
    % 
    %  This file contains code that helps you get started on the logistic
    %  regression exercise. You will need to complete the following functions 
    %  in this exercise:
    %
    %     sigmoid.m
    %     costFunction.m
    %     predict.m
    %     costFunctionReg.m
    %
    %  For this exercise, you will not need to change any code in this file,
    %  or any other files other than those mentioned above.
    %
    
    %% Initialization
    clear ; close all; clc
    
    %% Load Data
    %  The first two columns contain the exam scores and the third column
    %  contains the label.
    
    data = load('ex2data1.txt');
    X = data(:, [1, 2]); y = data(:, 3);
    
    %% ==================== Part 1: Plotting ====================
    %  We start the exercise by first plotting the data to understand the
    %  problem we are working with.
    
    fprintf(['Plotting data with + indicating (y = 1) examples and o ' ...
             'indicating (y = 0) examples.\n']);
    
    plotData(X, y);
    
    % Put some labels 
    hold on;
    % Labels and Legend
    xlabel('Exam 1 score')
    ylabel('Exam 2 score')
    
    % Specified in plot order
    legend('Admitted', 'Not admitted')
    hold off;
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    
    %% ============ Part 2: Compute Cost and Gradient ============
    %  In this part of the exercise, you will implement the cost and gradient
    %  for logistic regression. You need to complete the code in 
    %  costFunction.m
    
    %  Setup the data matrix appropriately, and add ones for the intercept term
    [m, n] = size(X);
    
    % Add intercept term to x and X_test
    X = [ones(m, 1) X];
    
    % Initialize fitting parameters
    initial_theta = zeros(n + 1, 1);
    
    % Compute and display initial cost and gradient
    [cost, grad] = costFunction(initial_theta, X, y);
    
    fprintf('Cost at initial theta (zeros): %f\n', cost);
    fprintf('Expected cost (approx): 0.693\n');
    fprintf('Gradient at initial theta (zeros): \n');
    fprintf(' %f \n', grad);
    fprintf('Expected gradients (approx):\n -0.1000\n -12.0092\n -11.2628\n');
    
    % Compute and display cost and gradient with non-zero theta
    test_theta = [-24; 0.2; 0.2];
    [cost, grad] = costFunction(test_theta, X, y);
    
    fprintf('\nCost at test theta: %f\n', cost);
    fprintf('Expected cost (approx): 0.218\n');
    fprintf('Gradient at test theta: \n');
    fprintf(' %f \n', grad);
    fprintf('Expected gradients (approx):\n 0.043\n 2.566\n 2.647\n');
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    
    %% ============= Part 3: Optimizing using fminunc  =============
    %  In this exercise, you will use a built-in function (fminunc) to find the
    %  optimal parameters theta.
    
    %  Set options for fminunc
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    
    %  Run fminunc to obtain the optimal theta
    %  This function will return theta and the cost 
    [theta, cost] = ...
        fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
    
    % Print theta to screen
    fprintf('Cost at theta found by fminunc: %f\n', cost);
    fprintf('Expected cost (approx): 0.203\n');
    fprintf('theta: \n');
    fprintf(' %f \n', theta);
    fprintf('Expected theta (approx):\n');
    fprintf(' -25.161\n 0.206\n 0.201\n');
    
    % Plot Boundary
    plotDecisionBoundary(theta, X, y);
    
    % Put some labels 
    hold on;
    % Labels and Legend
    xlabel('Exam 1 score')
    ylabel('Exam 2 score')
    
    % Specified in plot order
    legend('Admitted', 'Not admitted')
    hold off;
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    %% ============== Part 4: Predict and Accuracies ==============
    %  After learning the parameters, you'll likely want to use it to predict the outcomes
    %  on unseen data. In this part, you will use the logistic regression model
    %  to predict the probability that a student with score 45 on exam 1 and 
    %  score 85 on exam 2 will be admitted.
    %
    %  Furthermore, you will compute the training and test set accuracies of 
    %  our model.
    %
    %  Your task is to complete the code in predict.m
    
    %  Predict probability for a student with score 45 on exam 1 
    %  and score 85 on exam 2 
    
    prob = sigmoid([1 45 85] * theta);
    fprintf(['For a student with scores 45 and 85, we predict an admission ' ...
             'probability of %f\n'], prob);
    fprintf('Expected value: 0.775 +/- 0.002\n\n');
    
    % Compute accuracy on our training set
    p = predict(theta, X);
    
    fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
    fprintf('Expected accuracy (approx): 89.0\n');
    fprintf('\n');
    ex2.m

      

      2. Visualizing the data, plotData.m:

    function plotData(X, y)
    %PLOTDATA Plots the data points X and y into a new figure 
    %   PLOTDATA(x,y) plots the data points with + for the positive examples
    %   and o for the negative examples. X is assumed to be a Mx2 matrix.
    
    % Create New Figure
    figure;   hold on;
    
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Plot the positive and negative examples on a
    %               2D plot, using the option 'k+' for the positive
    %               examples and 'ko' for the negative examples.
    %
    
      pos = find(y == 1);  % indices of positive examples
      neg = find(y == 0);  % indices of negative examples
      plot(X(pos,1), X(pos,2), 'k+', 'LineWidth', 2, 'MarkerSize', 7);
      plot(X(neg,1), X(neg,2), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);
    
    % =========================================================================
    
    
    hold off;
    
    end
    plotData.m

      

      3. The logistic (sigmoid) function of logistic regression:

      $h_{\theta}(x)=g(\theta^{T}x)$: the model's estimate of the probability that $y=1$ given input $x$.

      $g(z)=\frac{1}{1+e^{-z}}$. Since $g(z)\geq 0.5$ exactly when $z\geq 0$, the model predicts $y=1$ whenever $\theta^{T}x\geq 0$; with two features this gives the straight-line decision boundary seen in the plot.

    function g = sigmoid(z)
    %SIGMOID Compute sigmoid function
    %   g = SIGMOID(z) computes the sigmoid of z.
    
    % You need to return the following variables correctly 
    g = zeros(size(z));
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the sigmoid of each value of z (z can be a matrix,
    %               vector or scalar).
    
      g=1./(1+exp(-z));
      
    % =============================================================
    
    end
    sigmoid.m
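
      A quick sanity check on sigmoid.m (an illustrative call, not part of the assignment): $g(0)$ should be exactly 0.5, large positive inputs should approach 1, and large negative inputs should approach 0.

    % Illustrative check of sigmoid.m:
    g = sigmoid([-5 0 5])
    % g = 0.0067   0.5000   0.9933   (approximately)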

     

      4. The logistic regression cost function:

      $J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}[y^{(i)}\log(h_\theta(x^{(i)}))+(1-y^{(i)})\log(1-h_{\theta}(x^{(i)}))]$

    function [J, grad] = costFunction(theta, X, y)
    %COSTFUNCTION Compute cost and gradient for logistic regression
    %   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
    %   parameter for logistic regression and the gradient of the cost
    %   w.r.t. to the parameters.
    
    % Initialize some useful values
    m = length(y); % number of training examples
    
    % You need to return the following variables correctly 
    J = 0;
    grad = zeros(size(theta));
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the cost of a particular choice of theta.
    %               You should set J to the cost.
    %               Compute the partial derivatives and set grad to the partial
    %               derivatives of the cost w.r.t. each parameter in theta
    %
    % Note: grad should have the same dimensions as theta
    %
      
      h = sigmoid(X*theta);                       % hypothesis h_theta(x)
      J = -sum(y.*log(h) + (1-y).*log(1-h))/m;    % cost function
      
      grad = (X')*(h-y)./m;                       % gradient (no learning rate alpha; fminunc uses it directly later)
      
      
    ##  h=sigmoid(X*theta);
    ##J=sum(-y'*log(h)-(1-y)'*log(1-h))/m;
    ##grad=((h-y)'*X)/m;
    
    
    
    
    
    % =============================================================
    
    end
    costFunction.m

      5. Gradient descent with learning rate $\alpha$:

      $\theta_j:=\theta_j-\frac{\alpha}{m}\sum_{i=1}^{m}[(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j]$

      

      The gradient without the learning rate $\alpha$ (this is what fminunc uses later):

      $\frac{\partial J(\theta)}{\partial \theta_j}=\frac{1}{m}\sum_{i=1}^{m}[(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j]$
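
      The exercise hands this gradient to fminunc instead of running gradient descent by hand, but for reference, a minimal hand-rolled loop would look like the sketch below (the learning rate and iteration count are hypothetical values chosen for illustration; they do not appear in the exercise files):

    % Minimal gradient-descent sketch (illustration only; the exercise uses fminunc).
    % Assumes X already has the intercept column and costFunction.m is on the path.
    alpha = 0.001;       % hypothetical learning rate
    num_iters = 400000;  % hypothetical iteration count
    theta = zeros(size(X, 2), 1);
    for iter = 1:num_iters
        [J, grad] = costFunction(theta, X, y);  % cost and gradient at the current theta
        theta = theta - alpha * grad;           % theta_j := theta_j - alpha * dJ/dtheta_j
    end
    % With unscaled exam scores convergence is slow; feature scaling (as in Exercise 1) would help.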

      We fit the parameters $\theta$ with the built-in fminunc function. Previously we fitted $\theta$ with gradient descent, which would also work here, but fminunc chooses the step size automatically, so there is no learning rate $\alpha$ to pick by hand. We only supply a maximum number of iterations, a cost function that also returns the gradient, and an initial $\theta$; fminunc then finds the optimal $\theta$ for us. You can think of it as a beefed-up version of gradient descent.

    options = optimset('GradObj', 'on', 'MaxIter', 400);
    [theta, cost] = ...
        fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);  % costFunction is the function we wrote above

       

      6. Using the fitted parameters $\theta$ to make predictions. For example, the probability that a student who scored 45 on the first exam and 85 on the second is admitted:

    prob = sigmoid([1 45 85] * theta); % probability of admission

      Predicting on the training examples X, the model reaches an accuracy of about 89%.

    function p = predict(theta, X)
    %PREDICT Predict whether the label is 0 or 1 using learned logistic 
    %regression parameters theta
    %   p = PREDICT(theta, X) computes the predictions for X using a 
    %   threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)
    
    m = size(X, 1); % Number of training examples
    
    % You need to return the following variables correctly
    p = zeros(m, 1);
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Complete the following code to make predictions using
    %               your learned logistic regression parameters. 
    %               You should set p to a vector of 0's and 1's
    %
      
      % Version 1: loop over the examples
      for i=1:m
        p(i,1)=sigmoid(X(i,:)*theta)>=0.5; % predict 1 when the probability is at least 0.5 (positive class)
      end;
      
      % Version 2 (commented out):
      %
    ##  ans=sigmoid(X*theta);
    ##  for i=1:m
    ##      if(ans(i,1)>=0.5)
    ##        p(i,1)=1;
    ##      else
    ##        p(i,1)=0;
    ##  end
    
    
    
    
    % =========================================================================
    
    
    end
    predict.m
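
      As a side note (not in the original file), the loop in predict.m can be collapsed into a single vectorized comparison, which is equivalent:

    % Equivalent vectorized body for predict.m (illustrative sketch):
    p = sigmoid(X * theta) >= 0.5;   % logical 0/1 vector of predictions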

    Part II: Regularized Logistic Regression

      Background: suppose you manage a factory that produces microchips. Each chip goes through two tests and must meet the standard on both to be accepted. You are given a historical dataset, ex2data2.txt: the first column is the result of the first test, the second column is the result of the second test, and the third column is the label, where 1 means the chip passed and 0 means it was rejected. Fit a model to this data that can serve as the acceptance criterion from now on.

      

      Visualizing the dataset shows that the two classes are separated by a fairly complicated curve. With only the two original features $x_1$ and $x_2$, such a curve clearly cannot be fitted, so we create more features from the original two by mapping them into all polynomial terms up to the sixth power, which gives a 28-dimensional feature vector. With this many features the model can easily overfit, which is not what we want: it fits the training set very well but generalizes poorly to new samples.

    Constructing more features:

    function out = mapFeature(X1, X2)
    % MAPFEATURE Feature mapping function to polynomial features
    %
    %   MAPFEATURE(X1, X2) maps the two input features
    %   to quadratic features used in the regularization exercise.
    %
    %   Returns a new feature array with more features, comprising of 
    %   X1, X2, X1.^2, X2.^2, X1*X2, X1*X2.^2, etc..
    %
    %   Inputs X1, X2 must be the same size
    %
    
    degree = 6;
    out = ones(size(X1(:,1)));
    for i = 1:degree
        for j = 0:i
            out(:, end+1) = (X1.^(i-j)).*(X2.^j);
        end
    end
    
    end
    mapFeature.m
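
      As a quick check (illustrative, not part of the exercise files), the intercept column plus the $i+1$ terms appended for each degree $i=1,\dots,6$ give $1+2+3+4+5+6+7=28$ columns:

    % Illustrative dimension check with hypothetical inputs 0.5 and -0.3:
    size(mapFeature(0.5, -0.3))   % ans = 1 28   (27 polynomial terms + the intercept)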

     

    To deal with this overfitting, we use regularization.

      1. The regularized regression script, ex2_reg.m:

    %% Machine Learning Online Class - Exercise 2: Logistic Regression
    %
    %  Instructions
    %  ------------
    %
    %  This file contains code that helps you get started on the second part
    %  of the exercise which covers regularization with logistic regression.
    %
    %  You will need to complete the following functions in this exercise:
    %
    %     sigmoid.m
    %     costFunction.m
    %     predict.m
    %     costFunctionReg.m
    %
    %  For this exercise, you will not need to change any code in this file,
    %  or any other files other than those mentioned above.
    %
    
    %% Initialization
    clear ; close all; clc
    
    %% Load Data
    %  The first two columns contain the X values and the third column
    %  contains the label (y).
    
    data = load('ex2data2.txt');
    X = data(:, [1, 2]); y = data(:, 3);
    
    plotData(X, y);
    
    % Put some labels
    hold on;
    
    % Labels and Legend
    xlabel('Microchip Test 1')
    ylabel('Microchip Test 2')
    
    % Specified in plot order
    legend('y = 1', 'y = 0')
    hold off;
    
    
    %% =========== Part 1: Regularized Logistic Regression ============
    %  In this part, you are given a dataset with data points that are not
    %  linearly separable. However, you would still like to use logistic
    %  regression to classify the data points.
    %
    %  To do so, you introduce more features to use -- in particular, you add
    %  polynomial features to our data matrix (similar to polynomial
    %  regression).
    %
    
    % Add Polynomial Features
    
    % Note that mapFeature also adds a column of ones for us, so the intercept
    % term is handled
    X = mapFeature(X(:,1), X(:,2)); % maps the original 2 features to 28 (27 polynomial terms + 1 intercept), so X is m x 28
    
    % Initialize fitting parameters
    initial_theta = zeros(size(X, 2), 1);
    
    % Set regularization parameter lambda to 1
    lambda = 1;
    
    % Compute and display initial cost and gradient for regularized logistic
    % regression
    [cost, grad] = costFunctionReg(initial_theta, X, y, lambda);
    
    fprintf('Cost at initial theta (zeros): %f\n', cost);
    fprintf('Expected cost (approx): 0.693\n');
    fprintf('Gradient at initial theta (zeros) - first five values only:\n');
    fprintf(' %f \n', grad(1:5));
    fprintf('Expected gradients (approx) - first five values only:\n');
    fprintf(' 0.0085\n 0.0188\n 0.0001\n 0.0503\n 0.0115\n');
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    % Compute and display cost and gradient
    % with all-ones theta and lambda = 10
    test_theta = ones(size(X,2),1);
    [cost, grad] = costFunctionReg(test_theta, X, y, 10);
    
    fprintf('\nCost at test theta (with lambda = 10): %f\n', cost);
    fprintf('Expected cost (approx): 3.16\n');
    fprintf('Gradient at test theta - first five values only:\n');
    fprintf(' %f \n', grad(1:5));
    fprintf('Expected gradients (approx) - first five values only:\n');
    fprintf(' 0.3460\n 0.1614\n 0.1948\n 0.2269\n 0.0922\n');
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    %% ============= Part 2: Regularization and Accuracies =============
    %  Optional Exercise:
    %  In this part, you will get to try different values of lambda and
    %  see how regularization affects the decision boundary
    %
    %  Try the following values of lambda (0, 1, 10, 100).
    %
    %  How does the decision boundary change when you vary lambda? How does
    %  the training set accuracy vary?
    %
    
    % Initialize fitting parameters
    initial_theta = zeros(size(X, 2), 1);
    
    % Set regularization parameter lambda to 1 (you should vary this)
    lambda = 1;
    
    % Set Options
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    
    % Optimize
    [theta, J, exit_flag] = ...
        fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
    
    % Plot Boundary
    plotDecisionBoundary(theta, X, y);
    hold on;
    title(sprintf('lambda = %g', lambda))
    
    % Labels and Legend
    xlabel('Microchip Test 1')
    ylabel('Microchip Test 2')
    
    legend('y = 1', 'y = 0', 'Decision boundary')
    hold off;
    
    % Compute accuracy on our training set
    p = predict(theta, X);
    
    fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
    fprintf('Expected accuracy (with lambda = 1): 83.1 (approx)\n');
    ex2_reg.m

      2. The regularized logistic regression cost function (the bias term $\theta_0$ is not regularized):

      $J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}[y^{(i)}\log(h_\theta(x^{(i)}))+(1-y^{(i)})\log(1-h_{\theta}(x^{(i)}))]+\frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^{2}$

       

      3. Gradient descent:

      With the learning rate:

        $\theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^{m}[(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_0]$   for $j=0$

        $\theta_j:=\theta_j-\alpha(\frac{1}{m}\sum_{i=1}^{m}[(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j]+\frac{\lambda}{m}\theta_j)$  for $j\geq 1$

      Without the learning rate (this is what fminunc uses later):

        $\frac{\partial J(\theta)}{\partial \theta_0}=\frac{1}{m}\sum_{i=1}^{m}[(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_0]$  for $j=0$

        $\frac{\partial J(\theta)}{\partial \theta_j}=(\frac{1}{m}\sum_{i=1}^{m}[(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j])+\frac{\lambda}{m}\theta_j$  for $j\geq 1$

      

    function [J, grad] = costFunctionReg(theta, X, y, lambda)
    %COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
    %   J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
    %   theta as the parameter for regularized logistic regression and the
    %   gradient of the cost w.r.t. to the parameters. 
    
    % Initialize some useful values
    m = length(y); % number of training examples
    
    % You need to return the following variables correctly 
    J = 0;
    grad = zeros(size(theta));
    
    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the cost of a particular choice of theta.
    %               You should set J to the cost.
    %               Compute the partial derivatives and set grad to the partial
    %               derivatives of the cost w.r.t. each parameter in theta
    
    
      h = sigmoid(X*theta);
      n = size(X,2);
      J = (-(y')*log(h)-(1-y)'*log(1-h))/m + (lambda/(2*m))*sum(theta([2:n],:).^2); % the bias term theta(1) is excluded from the regularization sum
    
      grad(1,1) = ((X(:,1)')*(h-y))/m; % gradient for theta_0: no regularization term
      grad([2:n],:) = (X(:,[2:n])')*(h-y)./m + (theta([2:n],:)).*(lambda/m); % regularized gradient for the remaining parameters
    
    
    ##h=sigmoid(X*theta);
    ##theta(1,1)=0;
    ##J=sum(-y'*log(h)-(1-y)'*log(1-h))/m+lambda/2/m*sum(power(theta,2));
    ##grad=((h-y)'*X)/m+lambda/m*theta';
    % =============================================================
    
    end
    costFunctionReg.m

      We can fit the dataset with different values of $\lambda$, visualize the resulting decision boundaries, and pick a good $\lambda$, as sketched below.
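
      A minimal sketch of such a sweep (illustrative only; the exercise script simply sets lambda = 1), reusing the functions already written for the exercise:

    % Illustrative lambda sweep (not part of the exercise files):
    for lambda = [0 1 10 100]
        initial_theta = zeros(size(X, 2), 1);
        options = optimset('GradObj', 'on', 'MaxIter', 400);
        theta = fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
        plotDecisionBoundary(theta, X, y);
        title(sprintf('lambda = %g', lambda));
        fprintf('lambda = %g, train accuracy = %f\n', lambda, mean(double(predict(theta, X) == y)) * 100);
    end

      With lambda = 0 the boundary overfits the training set, with lambda = 100 it underfits, and lambda = 1 gives a reasonable compromise.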

       4. Prediction works much as in Part I, except that to predict, say, a chip whose first test score is 45 and second test score is 80, the two features must first be passed through mapFeature to construct the polynomial features, as shown below.
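
      A one-line sketch of this (the scores 45 and 80 come from the sentence above; this exact call is not in the original scripts):

    % Map the two raw test scores with mapFeature, then apply the sigmoid (illustrative):
    prob = sigmoid(mapFeature(45, 80) * theta);  % mapFeature already prepends the intercept term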


  • Original article: https://www.cnblogs.com/-jiandong/p/11881503.html