
    Solving a Multi-class Classification Problem with a Neural Network: Digit Recognition

    1. Visualizing the data

    The training set provides 5000 images of handwritten digits. Each image is 20x20 pixels and is stored unrolled as a 1x400 vector, so the sample input is a 5000x400 matrix and the output is a 5000x1 vector. Coursera also provides a function that turns the grayscale values back into images, but it is not essential for solving the problem.
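
    A quick way to confirm these dimensions is to load the data and print the matrix sizes. A minimal sketch, assuming the assignment's data file is named ex4data1.mat and defines the variables X and y:

    % load the provided training data (file and variable names assumed from the assignment)
    load('ex4data1.mat');    % should define X (5000x400) and y (5000x1)
    fprintf('X is %d x %d, y is %d x %d\n', size(X), size(y));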

    2. Designing the Neural Network

    Since each sample input is a 1x400 vector, the input layer needs 400 neurons. There are 10 digits to predict, so the output layer needs 10 neurons. The problem is not complex, so a single hidden layer is enough (the early autonomous-driving experiments used a network with 3 hidden layers); the hidden layer is given 25 neurons.

    Define three variables to store the number of neurons in each layer, so they can be reused later.

    input_layer_size = 400;
    hidden_layer_size = 25;
    output_layer_size = 10;
    

    3. Writing the Cost Function (nnCostFunction)

    3.1 Implementing the necessary helper functions

    A few operations come up again and again when implementing a BP (backpropagation) neural network. If they are not wrapped into functions, it is easy to get lost halfway through the code.

    • sigmoid function

      function g = sigmoid(z)
      	g = 1 ./ (1+exp(-z));
      end
      
    • addBias function

      Since we did not include bias units when designing the network, but the forward pass needs the bias parameter \(\theta_0\) when computing the input of the next layer, adding the bias column is wrapped into a function.

      function A = addBias(X)
      	% prepend a column of ones (the bias unit) to every sample
      	A = [ones(size(X,1),1), X];
      end
      
    • oneHot function

      one-hot encoding: it converts the m*1 label vector y into an m*output_layer_size matrix. Each row holds output_layer_size values, one per output neuron, indicating whether that neuron should be 0 or 1 (a vectorized alternative is sketched right after the function below).

      function Y = oneHot(y,output_layer_size)
      	% expand each label y(i) into a row with a 1 at position y(i) and 0 elsewhere
      	Y = zeros(size(y,1),output_layer_size);
      	for i = 1:size(y,1)
      		Y(i,y(i)) = 1;
      	end
      end
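
      The loop above can also be replaced by a vectorized version; a small, equivalent sketch using the identity matrix:

      % vectorized one-hot encoding: row y(i) of eye(K) is exactly the encoding of label y(i)
      I = eye(output_layer_size);
      Y = I(y, :);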
      

    3.2 Standardizing the matrix definitions

    When implementing a neural network, matrix-dimension errors are probably the problem beginners hit most often. Every time such an error appears, you usually have to re-derive the correct dimensions starting from the input layer and then rewrite how, and in what order, the matrices are multiplied. This happens when the code is written without a consistent convention: the same matrix is written as m*n in one place and n*m in another. If you decide how every matrix is represented before writing any code, and fix the meaning of its rows and columns, you can correct such errors quickly or avoid them entirely.

    Here we follow the matrix conventions used in the Coursera lecture notes.

    Matrix              Symbol       Size         Row meaning                                     Column meaning
    Input matrix        X, z^(i)     m x n        number of samples                               units in that layer (number of features)
    Parameter matrix    Theta^(j)    b x (a+1)    units in this layer (one activation per row)    units in the previous layer + 1 (inputs per activation, plus the bias)
    Output matrix       Y, a^(i)     m x k        number of samples                               units in that layer (number of outputs)

    We also use the shorthand: input_layer_size = n, hidden_layer_size = l, output_layer_size = K.
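
    With this convention fixed, a quick sanity check of the expected shapes for our network catches dimension bugs early (a small sketch; run it once the parameter matrices exist):

    % expected: X is 5000x400 (m x n), Theta1 is 25x401 (l x (n+1)), Theta2 is 10x26 (K x (l+1))
    assert(isequal(size(Theta1), [hidden_layer_size, input_layer_size + 1]));
    assert(isequal(size(Theta2), [output_layer_size, hidden_layer_size + 1]));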

    3.3 Implementing feedforward propagation (Feedforward)

    Feedforward propagation computes the output value of every output neuron, and it can be fully vectorized.

    (Figure: screenshot of the vectorized feedforward computation, not reproduced here)
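
    Written out with the conventions from section 3.2, the vectorized feedforward pass implemented in the code below is:

    \[
    \begin{aligned}
    z^{(2)} &= \mathrm{addBias}(X)\,\Theta_1^{T}, & a^{(2)} &= g(z^{(2)}) \\
    z^{(3)} &= \mathrm{addBias}(a^{(2)})\,\Theta_2^{T}, & h_\Theta(X) = a^{(3)} &= g(z^{(3)})
    \end{aligned}
    \]

    where \(g\) is the sigmoid function.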

    The feedforward pass is implemented inside the nnCostFunction function.

    Arguments: [ nn_params (all parameters \(\theta\) unrolled into one vector), input_layer_size, hidden_layer_size, output_layer_size, X, y, lambda ]

    function [J grad] = nnCostFunction(nn_params, ...
                                       input_layer_size, ...
                                       hidden_layer_size, ...
                                       output_layer_size, ...
                                       X, y, lambda)
    % Use reshape to rebuild the matrices Theta1 and Theta2 from the vector nn_params.
    % Note that both Theta1 and Theta2 include the bias-unit parameters.
    Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                     hidden_layer_size, (input_layer_size + 1));
    
    Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                     output_layer_size, (hidden_layer_size + 1));
    % number of training examples
    m = size(X, 1);
    % cost value
    J = 0;
    % gradients
    Theta1_grad = zeros(size(Theta1));
    Theta2_grad = zeros(size(Theta2));
    
    % initialization done
    % ==============================================================================
    % ==============================================================================
    % Part 1: feedforward propagation
    
    % feedforward pass
    z2 = addBias(X) * Theta1';
    a2 = sigmoid(z2);
    z3 = addBias(a2) * Theta2';
    a3 = sigmoid(z3);
    % one-hot encode the labels
    encodeY = oneHot(y,output_layer_size);
    
    % tempTheta* are used for the regularization term: zero out the thetas that multiply the bias units
    tempTheta2 = Theta2;
    tempTheta2(:,1) = 0;
    tempTheta1 = Theta1;
    tempTheta1(:,1) = 0;
    
    J = 1/m * sum(sum((-encodeY .* log(a3) - (1-encodeY) .* log(1-a3)))) + 1/(2*m) * lambda * (sum(sum(tempTheta1 .^2)) + sum(sum(tempTheta2 .^ 2)) );
    

    The cost function formula:

    \[ J(\Theta) = \frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[-y_k^{(i)}\log\big(h_\Theta(x^{(i)})_k\big) - \big(1-y_k^{(i)}\big)\log\big(1-h_\Theta(x^{(i)})_k\big)\right] + \frac{\lambda}{2m}\left[\sum_{i=1}^{l}\sum_{j=1}^{n}\big(\Theta^{(1)}_{ij}\big)^2 + \sum_{i=1}^{K}\sum_{j=1}^{l}\big(\Theta^{(2)}_{ij}\big)^2\right] \]

    A neural network has a very strong learning capacity and can fit highly complex models, so without regularization it is very likely to overfit: the training (empirical) error becomes small while the generalization error does not.

    3.4 Implementing backpropagation (Backpropagation)

    This is the core of the backpropagation algorithm, and also the hardest part to understand and the most complicated to code. It is bound to be painful the first time; I was stuck on this part for three days.

    This part involves the chain rule and matrix differentiation. The chain rule is a must; if you cannot do matrix differentiation, you can still derive the result from the matrix dimensions (it is just a little slower).

    I recommend a video on Bilibili that explains the gradient computation of backpropagation in detail. The same author also posted a Python implementation of Andrew Ng's digit-recognition network; I have not watched that one, but the overall approach must be similar to the MATLAB implementation, so it can also serve as an explanation of backpropagation.

    手把手教大家实现吴恩达深度学习作业第二周06-反向传播推导

    Backpropagation is used to compute the gradients. Using the chain rule and matrix differentiation, the gradient of every parameter \(\theta\) can be derived.
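
    Written out with the conventions from section 3.2 (here \(\widetilde{\Theta}_j\) denotes \(\Theta_j\) with its bias column zeroed, i.e. tempTheta in the code), the gradients computed below are:

    \[
    \begin{aligned}
    \delta^{(3)} &= a^{(3)} - Y \\
    \delta^{(2)} &= \big(\delta^{(3)}\,\Theta_2(:,2\!:\!\mathrm{end})\big) \odot a^{(2)} \odot \big(1 - a^{(2)}\big) \\
    \frac{\partial J}{\partial \Theta_2} &= \frac{1}{m}\,(\delta^{(3)})^{T}\,\mathrm{addBias}(a^{(2)}) + \frac{\lambda}{m}\,\widetilde{\Theta}_2 \\
    \frac{\partial J}{\partial \Theta_1} &= \frac{1}{m}\,(\delta^{(2)})^{T}\,\mathrm{addBias}(X) + \frac{\lambda}{m}\,\widetilde{\Theta}_1
    \end{aligned}
    \]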

    % delta3 = a3 - encodeY; its transpose times addBias(a2) gives the gradient of Theta2
    Theta2_grad = 1/m*(a3-encodeY)' * addBias(a2) + lambda * tempTheta2/m;
    % delta2 = (delta3 * Theta2(:,2:end)) .* a2 .* (1-a2), written here in transposed form
    Theta1_grad = 1/m*(Theta2(:,2:end)' * (a3-encodeY)' .* a2' .* (1-a2') * addBias(X)) + lambda * tempTheta1 / m;
    % Unroll gradients
    grad = [Theta1_grad(:) ; Theta2_grad(:)];
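
    Before training on the full data set, it is worth checking the analytic gradients against a numerical approximation. This is my own sketch, not part of the Coursera starter code; it assumes costFunction is the handle defined in the calling code of section 4.2 (returning [J, grad]), and it should only be run on a small network or a few parameters because the loop is slow:

    % numerical gradient check: compare (J(theta+e) - J(theta-e)) / (2e) with the backprop gradient
    e = 1e-4;
    theta = initial_nn_params;
    [~, grad] = costFunction(theta);
    numgrad = zeros(size(theta));
    for p = 1:numel(theta)
        perturb = zeros(size(theta));
        perturb(p) = e;
        numgrad(p) = (costFunction(theta + perturb) - costFunction(theta - perturb)) / (2 * e);
    end
    % the relative difference should be tiny (around 1e-9) if backprop is correct
    fprintf('gradient check diff: %g\n', norm(numgrad - grad) / norm(numgrad + grad));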
    

    4. Training the Network with an Advanced Optimizer (Learning parameters using fmincg)

    4.1 Randomly initializing the parameters

    function W = randInitializeWeights(L_in, L_out)
    
    W = zeros(L_out, 1 + L_in);
    
    % Randomly initialize the weights to small values
    epsilon_init = 0.12;
    W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
    
    end
    

    The importance of random initialization for a neural network needs no elaboration; it is basic material, and if it is unclear, rewatch that lecture of Andrew Ng's course.
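
    As an aside, the value epsilon_init = 0.12 used above is not arbitrary; one common heuristic ties it to the sizes of the adjacent layers:

    \[ \epsilon_{\mathrm{init}} = \frac{\sqrt{6}}{\sqrt{L_{\mathrm{in}} + L_{\mathrm{out}}}} \approx \frac{\sqrt{6}}{\sqrt{400 + 25}} \approx 0.12 \]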

    initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
    initial_Theta2 = randInitializeWeights(hidden_layer_size, output_layer_size);
    % unroll into a single vector so the parameters are easy to pass around
    initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
    

    4.2 Calling the advanced optimization function to train the network

    This is the advanced optimization function fmincg provided by Coursera. Learn how to use it before calling it; if you cannot get it to work, use fminunc instead, which is slightly slower.
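
    If you would rather not use fmincg, a roughly equivalent call with the built-in fminunc looks like this (a sketch, assuming the same costFunction handle and initial_nn_params defined in the calling code further below):

    % tell fminunc that costFunction returns the gradient as its second output
    options = optimset('MaxIter', 50, 'GradObj', 'on');
    [nn_params, cost] = fminunc(costFunction, initial_nn_params, options);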

    function [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
    % Minimize a continuous differentiable multivariate function. Starting point
    % is given by "X" (D by 1), and the function named in the string "f", must
    % return a function value and a vector of partial derivatives. The Polack-
    % Ribiere flavour of conjugate gradients is used to compute search directions,
    % and a line search using quadratic and cubic polynomial approximations and the
    % Wolfe-Powell stopping criteria is used together with the slope ratio method
    % for guessing initial step sizes. Additionally a bunch of checks are made to
    % make sure that exploration is taking place and that extrapolation will not
    % be unboundedly large. The "length" gives the length of the run: if it is
    % positive, it gives the maximum number of line searches, if negative its
    % absolute gives the maximum allowed number of function evaluations. You can
    % (optionally) give "length" a second component, which will indicate the
    % reduction in function value to be expected in the first line-search (defaults
    % to 1.0). The function returns when either its length is up, or if no further
    % progress can be made (ie, we are at a minimum, or so close that due to
    % numerical problems, we cannot get any closer). If the function terminates
    % within a few iterations, it could be an indication that the function value
    % and derivatives are not consistent (ie, there may be a bug in the
    % implementation of your "f" function). The function returns the found
    % solution "X", a vector of function values "fX" indicating the progress made
    % and "i" the number of iterations (line searches or function evaluations,
    % depending on the sign of "length") used.
    %
    % Usage: [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
    %
    % See also: checkgrad 
    %
    % Copyright (C) 2001 and 2002 by Carl Edward Rasmussen. Date 2002-02-13
    %
    %
    % (C) Copyright 1999, 2000 & 2001, Carl Edward Rasmussen
    % 
    % Permission is granted for anyone to copy, use, or modify these
    % programs and accompanying documents for purposes of research or
    % education, provided this copyright notice is retained, and note is
    % made of any changes that have been made.
    % 
    % These programs and documents are distributed without any warranty,
    % express or implied.  As the programs were written for research
    % purposes only, they have not been tested to the degree that would be
    % advisable in any important application.  All use of these programs is
    % entirely at the user's own risk.
    %
    % [ml-class] Changes Made:
    % 1) Function name and argument specifications
    % 2) Output display
    %
    
    % Read options
    if exist('options', 'var') && ~isempty(options) && isfield(options, 'MaxIter')
        length = options.MaxIter;
    else
        length = 100;
    end
    
    
    RHO = 0.01;                            % a bunch of constants for line searches
    SIG = 0.5;       % RHO and SIG are the constants in the Wolfe-Powell conditions
    INT = 0.1;    % don't reevaluate within 0.1 of the limit of the current bracket
    EXT = 3.0;                    % extrapolate maximum 3 times the current bracket
    MAX = 20;                         % max 20 function evaluations per line search
    RATIO = 100;                                      % maximum allowed slope ratio
    
    argstr = ['feval(f, X'];                      % compose string used to call function
    for i = 1:(nargin - 3)
      argstr = [argstr, ',P', int2str(i)];
    end
    argstr = [argstr, ')'];
    
    if max(size(length)) == 2, red=length(2); length=length(1); else red=1; end
    S=['Iteration '];
    
    i = 0;                                            % zero the run length counter
    ls_failed = 0;                             % no previous line search has failed
    fX = [];
    [f1 df1] = eval(argstr);                      % get function value and gradient
    i = i + (length<0);                                            % count epochs?!
    s = -df1;                                        % search direction is steepest
    d1 = -s'*s;                                                 % this is the slope
    z1 = red/(1-d1);                                  % initial step is red/(|s|+1)
    
    while i < abs(length)                                      % while not finished
      i = i + (length>0);                                      % count iterations?!
    
      X0 = X; f0 = f1; df0 = df1;                   % make a copy of current values
      X = X + z1*s;                                             % begin line search
      [f2 df2] = eval(argstr);
      i = i + (length<0);                                          % count epochs?!
      d2 = df2'*s;
      f3 = f1; d3 = d1; z3 = -z1;             % initialize point 3 equal to point 1
      if length>0, M = MAX; else M = min(MAX, -length-i); end
      success = 0; limit = -1;                     % initialize quantities
      while 1
        while ((f2 > f1+z1*RHO*d1) || (d2 > -SIG*d1)) && (M > 0) 
          limit = z1;                                         % tighten the bracket
          if f2 > f1
            z2 = z3 - (0.5*d3*z3*z3)/(d3*z3+f2-f3);                 % quadratic fit
          else
            A = 6*(f2-f3)/z3+3*(d2+d3);                                 % cubic fit
            B = 3*(f3-f2)-z3*(d3+2*d2);
            z2 = (sqrt(B*B-A*d2*z3*z3)-B)/A;       % numerical error possible - ok!
          end
          if isnan(z2) || isinf(z2)
            z2 = z3/2;                  % if we had a numerical problem then bisect
          end
          z2 = max(min(z2, INT*z3),(1-INT)*z3);  % don't accept too close to limits
          z1 = z1 + z2;                                           % update the step
          X = X + z2*s;
          [f2 df2] = eval(argstr);
          M = M - 1; i = i + (length<0);                           % count epochs?!
          d2 = df2'*s;
          z3 = z3-z2;                    % z3 is now relative to the location of z2
        end
        if f2 > f1+z1*RHO*d1 || d2 > -SIG*d1
          break;                                                % this is a failure
        elseif d2 > SIG*d1
          success = 1; break;                                             % success
        elseif M == 0
          break;                                                          % failure
        end
        A = 6*(f2-f3)/z3+3*(d2+d3);                      % make cubic extrapolation
        B = 3*(f3-f2)-z3*(d3+2*d2);
        z2 = -d2*z3*z3/(B+sqrt(B*B-A*d2*z3*z3));        % num. error possible - ok!
        if ~isreal(z2) || isnan(z2) || isinf(z2) || z2 < 0 % num prob or wrong sign?
          if limit < -0.5                               % if we have no upper limit
            z2 = z1 * (EXT-1);                 % the extrapolate the maximum amount
          else
            z2 = (limit-z1)/2;                                   % otherwise bisect
          end
        elseif (limit > -0.5) && (z2+z1 > limit)         % extraplation beyond max?
          z2 = (limit-z1)/2;                                               % bisect
        elseif (limit < -0.5) && (z2+z1 > z1*EXT)       % extrapolation beyond limit
          z2 = z1*(EXT-1.0);                           % set to extrapolation limit
        elseif z2 < -z3*INT
          z2 = -z3*INT;
        elseif (limit > -0.5) && (z2 < (limit-z1)*(1.0-INT))  % too close to limit?
          z2 = (limit-z1)*(1.0-INT);
        end
        f3 = f2; d3 = d2; z3 = -z2;                  % set point 3 equal to point 2
        z1 = z1 + z2; X = X + z2*s;                      % update current estimates
        [f2 df2] = eval(argstr);
        M = M - 1; i = i + (length<0);                             % count epochs?!
        d2 = df2'*s;
      end                                                      % end of line search
    
      if success                                         % if line search succeeded
        f1 = f2; fX = [fX' f1]';
        fprintf('%s %4i | Cost: %4.6e\n', S, i, f1);
        s = (df2'*df2-df1'*df2)/(df1'*df1)*s - df2;      % Polack-Ribiere direction
        tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
        d2 = df1'*s;
        if d2 > 0                                      % new slope must be negative
          s = -df1;                              % otherwise use steepest direction
          d2 = -s'*s;    
        end
        z1 = z1 * min(RATIO, d1/(d2-realmin));          % slope ratio but max RATIO
        d1 = d2;
        ls_failed = 0;                              % this line search did not fail
      else
        X = X0; f1 = f0; df1 = df0;  % restore point from before failed line search
        if ls_failed || i > abs(length)          % line search failed twice in a row
          break;                             % or we ran out of time, so we give up
        end
        tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
        s = -df1;                                                    % try steepest
        d1 = -s'*s;
        z1 = 1/(1-d1);                     
        ls_failed = 1;                                    % this line search failed
      end
      if exist('OCTAVE_VERSION')
        fflush(stdout);
      end
    end
    fprintf('\n');
    
    

    Calling code:

    options = optimset('MaxIter', 50);
    
    %  You should also try different values of lambda
    lambda = 1;
    
    % Create "short hand" for the cost function to be minimized
    costFunction = @(p) nnCostFunction(p, ...
                                       input_layer_size, ...
                                       hidden_layer_size, ...
                                       output_layer_size, X, y, lambda);
    
    % Now, costFunction is a function that takes in only one argument (the
    % neural network parameters)
    [nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
    
    % Obtain Theta1 and Theta2 back from nn_params
    Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                     hidden_layer_size, (input_layer_size + 1));
    
    Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                     output_layer_size, (hidden_layer_size + 1));
    

    5. Computing the Training Error / Tuning Parameters / Evaluating the Model (Trying out different learning settings)

    function p = predict(Theta1, Theta2, X)
    %PREDICT Predict the label of an input given a trained neural network
    %   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
    %   trained weights of a neural network (Theta1, Theta2)
    
    % Useful values
    m = size(X, 1);
    num_labels = size(Theta2, 1);
    
    % You need to return the following variables correctly 
    p = zeros(size(X, 1), 1);
    
    h1 = sigmoid([ones(m, 1) X] * Theta1');
    h2 = sigmoid([ones(m, 1) h1] * Theta2');
    [dummy, p] = max(h2, [], 2);
    
    % =========================================================================
    end
    

    Calling code:

    pred = predict(Theta1, Theta2, X);
    fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
    

    By tuning the parameters we can observe how the training (empirical) error changes:

    lambda    Max Iterations                   Accuracy
    1         50                               94.34% ~ 96.00%
    1         100                              98.64%
    1.5       1000                             99.04%
    1.5       5000 (brutal on the machine)     99.26%
    2         2000                             98.68%

    These are all training (empirical) errors, while our real goal is to reduce the generalization error. However, Coursera does not provide a test set, so we will skip measuring the generalization error for now; strictly speaking, that measurement should also be part of the process.
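
    If you do want an estimate of the generalization error, one option is to hold out part of the 5000 examples before training and evaluate only on that held-out portion. This is my own sketch, not part of the assignment:

    % hold out 20% of the data as a rough validation set
    m = size(X, 1);
    idx = randperm(m);
    m_train = round(0.8 * m);
    Xtrain = X(idx(1:m_train), :);      ytrain = y(idx(1:m_train));
    Xval   = X(idx(m_train+1:end), :);  yval   = y(idx(m_train+1:end));

    % train on Xtrain/ytrain with the same calling code as in section 4.2, then:
    pred_val = predict(Theta1, Theta2, Xval);
    fprintf('Validation Set Accuracy: %f\n', mean(double(pred_val == yval)) * 100);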

    6. Visualizing the Hidden Layer

    This part is less important, but we can take a look at what the hidden-layer neurons are actually doing.

    For each hidden neuron, find an input vector[1,400] that drives its activation close to 1 (at that point it most likely represents some particular pattern, while the other neurons stay close to 0), and then render that vector as a 20x20 pixel image.

    fprintf('\nVisualizing Neural Network... \n')
    
    displayData(Theta1(:, 2:end));
    
    fprintf('\nProgram paused. Press enter to continue.\n');
    pause;
    
    (Figure: screenshot of the learned hidden-layer features rendered as 20x20 images, not reproduced here)