  • UFLDL Tutorial Notes and Exercise Answers V (Linear Decoder Autoencoders and Working with Large Images: Convolution and Pooling)

    Autoencoder with a linear decoder

    The linear decoder addresses a problem with using a sigmoid in the output layer of a sparse autoencoder. Because the sparse autoencoder is trained so that its output equals its input, and the sigmoid's range is [0,1], the inputs would also have to lie in [0,1]. That is an implicit constraint on the input features. To remove it, we let the last layer use a linear activation, i.e. a = z.
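
    A minimal sketch of this change (with illustrative variables x, W1, W2, b1, b2, not the exercise's exact code): the hidden layer keeps the sigmoid, the output layer becomes the identity, and the output error term therefore loses its f'(z3) factor.

    z2 = W1 * x + b1;
    a2 = 1 ./ (1 + exp(-z2));   % hidden layer still uses the sigmoid
    z3 = W2 * a2 + b2;
    a3 = z3;                    % linear decoder: a = z instead of sigmoid(z)
    delta3 = a3 - x;            % output error; f'(z3) = 1, so no sigmoid-derivative factor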

    Exercise answers:

    sparseAutoencoderLinearCost.m

    function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                                lambda, sparsityParam, beta, data)
    % -------------------- YOUR CODE HERE --------------------
    % Instructions:
    %   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
    %   earlier exercise onto this file, renaming the function to
    %   sparseAutoencoderLinearCost, and changing the autoencoder to use a
    %   linear decoder.
    % -------------------- YOUR CODE HERE --------------------      
    
    % visibleSize: the number of input units (probably 64) 
    % hiddenSize: the number of hidden units (probably 25) 
    % lambda: weight decay parameter
    % sparsityParam: The desired average activation for the hidden units (denoted in the lecture
    %                           notes by the greek alphabet rho, which looks like a lower-case "p").
    % beta: weight of sparsity penalty term
    % data: Our 64x10000 matrix containing the training data.  So, data(:,i) is the i-th training example. 
      
    % The input theta is a vector (because minFunc expects the parameters to be a vector). 
    % We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this 
    % follows the notation convention of the lecture notes. 
    
    W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);    % W1 is hiddenSize x visibleSize (25x64)
    W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);  % W2 is visibleSize x hiddenSize (64x25)
    b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);     % b1 is hiddenSize x 1 (25)
    b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);              % b2 is visibleSize x 1 (64)
    
    % Cost and gradient variables (your code needs to compute these values). 
    % Here, we initialize them to zeros. 
    cost = 0;
    W1grad = zeros(size(W1));      % 25x64
    W2grad = zeros(size(W2));      % 64x25
    b1grad = zeros(size(b1));      % 25 (hidden)
    b2grad = zeros(size(b2));      % 64 (visible)
    
    %% ---------- YOUR CODE HERE --------------------------------------
    %  Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
    %                and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
    %
    % W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
    % Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
    % as b1, etc.  Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
    % respect to W1.  I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b) 
    % with respect to the input parameter W1(i,j).  Thus, W1grad should be equal to the term 
    % [(1/m) Delta W^{(1)} + lambda W^{(1)}] in the last block of pseudo-code in Section 2.2 
    % of the lecture notes (and similarly for W2grad, b1grad, b2grad).
    % 
    % Stated differently, if we were using batch gradient descent to optimize the parameters,
    % the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2. 
    % 
    
    
    % 1. Forward propagation
    data_size=size(data);           % [64, 10000]
    active_value2=repmat(b1,1,data_size(2));    % replicate b1 across the 10000 examples, 25x10000
    active_value3=repmat(b2,1,data_size(2));    % replicate b2 across the 10000 examples, 64x10000
    active_value2=sigmoid(W1*data+active_value2);  % hidden activations for all examples, 25x10000, one column per example
    active_value3=W2*active_value2+active_value3;  % output activations, 64x10000; linear decoder, so no sigmoid here
    % 2. Compute the cost and the error terms
    ave_square=sum(sum((active_value3-data).^2)./2)/data_size(2);   % first cost term: average of one-half squared reconstruction error
    weight_decay=lambda/2*(sum(sum(W1.^2))+sum(sum(W2.^2)));        % second cost term: weight decay over all weights
    
    p_real=sum(active_value2,2)./data_size(2);       % estimated average activation rho_hat of the hidden units, 25x1
    p_para=repmat(sparsityParam,hiddenSize,1);       % target sparsity rho
    sparsity=beta.*sum(p_para.*log(p_para./p_real)+(1-p_para).*log((1-p_para)./(1-p_real)));   % KL-divergence sparsity penalty
    cost=ave_square+weight_decay+sparsity;      % final cost function
    
    delta3=(active_value3-data);      % output-layer error, 64x10000; f'(z3) = 1 for the linear decoder, so no derivative factor
    average_sparsity=repmat(sum(active_value2,2)./data_size(2),1,data_size(2));  % rho_hat replicated across examples
    default_sparsity=repmat(sparsityParam,hiddenSize,data_size(2));     % target sparsity rho replicated across examples
    sparsity_penalty=beta.*(-(default_sparsity./average_sparsity)+((1-default_sparsity)./(1-average_sparsity)));  % sparsity term added to the hidden error
    delta2=(W2'*delta3+sparsity_penalty).*((active_value2).*(1-active_value2));  % hidden-layer error, 25x10000, one column per example
    % 3. Backpropagation
    W2grad=delta3*active_value2'./data_size(2)+lambda.*W2;      % gradient, 64x25
    W1grad=delta2*data'./data_size(2)+lambda.*W1;               % gradient, 25x64
    b2grad=sum(delta3,2)./data_size(2);           % 64 (visible)
    b1grad=sum(delta2,2)./data_size(2);           % 25 (hidden)
    
    %-------------------------------------------------------------------
    % After computing the cost and gradient, we will convert the gradients back
    % to a vector format (suitable for minFunc).  Specifically, we will unroll
    % your gradient matrices into a vector.
    
    grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];                              
    
    end
    
    %-------------------------------------------------------------------
    % Here's an implementation of the sigmoid function, which you may find useful
    % in your computation of the costs and the gradients.  This inputs a (row or
    % column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)). 
    
    function sigm = sigmoid(x)
      
        sigm = 1 ./ (1 + exp(-x));
    end
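
    To sanity-check this implementation, one option (a sketch only, assuming initializeParameters.m and computeNumericalGradient.m from the earlier sparse autoencoder exercise are on the path; the small problem size here is made up for speed) is a numerical gradient check:

    visibleSize = 8; hiddenSize = 5;
    lambda = 3e-3; sparsityParam = 0.035; beta = 5;
    data = rand(visibleSize, 10);
    theta = initializeParameters(hiddenSize, visibleSize);
    [cost, grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                               lambda, sparsityParam, beta, data);
    numGrad = computeNumericalGradient(@(p) sparseAutoencoderLinearCost(p, visibleSize, ...
                  hiddenSize, lambda, sparsityParam, beta, data), theta);
    disp(norm(numGrad - grad) / norm(numGrad + grad));   % should be very small (e.g. < 1e-9)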
    


    Working with large images

    Working with large images relies mainly on convolution and pooling. Convolution exploits an inherent property of natural images: the statistics of one part of an image are the same as those of any other part.

    This means that features learned on one part of an image can also be used on another part, so the same learned features can be applied at every position in the image. The procedure is to first train a sparse autoencoder on unlabeled patches; its weights form a hiddenSize x inputLayerSize matrix, and each 1 x inputLayerSize weight vector w is then convolved with the large image.

    The convolution is computed as follows: for each feature map in the layer, convolve its kernel with each of the three channels of the input image, sum the results, add the bias, and apply the sigmoid; the result is that feature map.
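
    A rough sketch of that computation for a single feature map (illustrative variables: im is an imageDim x imageDim x 3 image, kernel holds one patchDim x patchDim kernel per channel, bias is the scalar bias; the real code below also flips the kernel and folds in the whitening):

    response = zeros(imageDim - patchDim + 1);
    for channel = 1:3
        response = response + conv2(im(:, :, channel), kernel(:, :, channel), 'valid');
    end
    featureMap = 1 ./ (1 + exp(-(response + bias)));   % add bias, apply sigmoid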

    Pooling addresses the fact that convolution produces far too many features, which easily leads to overfitting. Images have a kind of "stationarity" property: a feature that is useful in one region is very likely to be useful in another, so we can aggregate statistics of the features at different positions (mean pooling or max pooling).

    Pooling is computed by taking the mean or the maximum over each p x q region of the previous layer's feature map.
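
    A compact, loop-free way to express mean pooling of one feature map over non-overlapping p x q blocks (a sketch only; the exercise solution below uses explicit loops instead):

    avg = conv2(featureMap, ones(p, q) / (p * q), 'valid');   % block mean at every offset
    pooled = avg(1:p:end, 1:q:end);                           % keep one value per p x q block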

    For an m x n image, let k be the number of hidden units and a x b the size of the input patch. Convolution then yields a feature volume of dimension k * (m-a+1) * (n-b+1). With a pooling window of size [p, q], the pooled features have dimension k * (m-a+1)/p * (n-b+1)/q.
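
    With the numbers used later in this exercise (m = n = 64, a = b = 8, k = 400, p = q = 19), this works out as follows:

    k = 400; m = 64; a = 8; p = 19;
    convolvedDim = m - a + 1;              % 57
    numConvolved = k * convolvedDim^2;     % 400 * 57 * 57 = 1,299,600 convolved features
    pooledDim = floor(convolvedDim / p);   % 3
    numPooled = k * pooledDim^2;           % 400 * 3 * 3 = 3,600 pooled features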


    Training: a convolutional neural network can be trained with backpropagation, which is supervised learning throughout. For the derivation of the formulas see this blog post: http://blog.csdn.net/lu597203933/article/details/46575871.

    This exercise takes a second approach, based on the linear decoder exercise: train a sparse autoencoder on small 8x8 patches (randomly cropped from the large images) with 400 hidden units. For the large images, those 400 hidden units play the role of 400 feature maps, and each hidden unit's weight vector (1 x 192 = 8 x 8 x 3, where 3 is the number of channels) corresponds to one convolution kernel.

    The learned kernels are then applied to the large (64x64x3) images.
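
    In code, that correspondence looks like this (a sketch; the actual handling, including the whitening and mean-subtraction terms, is in cnnConvolve.m below):

    patchSize = patchDim * patchDim;                % 64 pixels per channel
    kernels = zeros(patchDim, patchDim, 3);
    for channel = 1:3
        offset = (channel - 1) * patchSize;
        kernels(:, :, channel) = reshape(W(featureNum, offset+1 : offset+patchSize), ...
                                         patchDim, patchDim);   % one 8x8 kernel per channel
    end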

    Exercise answers:


    cnnConvolve.m

    function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)  % patchDim = 8, numFeatures = hiddenSize
    %cnnConvolve Returns the convolution of the features given by W and b with
    %the given images
    %
    % Parameters:
    %  patchDim - patch (feature) dimension
    %  numFeatures - number of features
    %  images - large images to convolve with, matrix in the form
    %           images(r, c, channel, image number)
    %  W, b - W, b for features from the sparse autoencoder
    %  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
    %                        preprocessing
    %
    % Returns:
    %  convolvedFeatures - matrix of convolved features in the form
    %                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
    
    numImages = size(images, 4);
    imageDim = size(images, 1);        %% = 64
    imageChannels = size(images, 3);
    
    convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
    
    % Instructions:
    %   Convolve every feature with every large image here to produce the 
    %   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1) 
    %   matrix convolvedFeatures, such that 
    %   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
    %   value of the convolved featureNum feature for the imageNum image over
    %   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
    %
    % Expected running times: 
    %   Convolving with 100 images should take less than 3 minutes 
    %   Convolving with 5000 images should take around an hour
    %   (So to save time when testing, you should convolve with less images, as
    %   described earlier)
    
    % -------------------- YOUR CODE HERE --------------------
    % Precompute the matrices that will be used during the convolution. Recall
    % that you need to take into account the whitening and mean subtraction
    % steps
    
    WT = W*ZCAWhite;    % fold the ZCA whitening into the weights (see the derivation in the exercise)
    b_mean = b - WT * meanPatch;
    
    
    % --------------------------------------------------------
    patchSize = patchDim * patchDim;
    
    convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
    for imageNum = 1:numImages
      for featureNum = 1:numFeatures
    
        % convolution of image with feature matrix for each channel
        convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
        for channel = 1:imageChannels
    
          % Obtain the feature (patchDim x patchDim) needed during the convolution
          % ---- YOUR CODE HERE ----
          feature = zeros(8,8); % You should replace this
          offset = (channel -1) * patchSize;
          feature = reshape(WT(featureNum, offset+1 : offset+patchSize), patchDim, patchDim);
          
          
          
          % ------------------------
    
          % Flip the feature matrix because of the definition of convolution, as explained later
          feature = flipud(fliplr(squeeze(feature)));
          
          % Obtain the image
          im = squeeze(images(:, :, channel, imageNum));
    
          % Convolve "feature" with "im", adding the result to convolvedImage
          % be sure to do a 'valid' convolution
          % ---- YOUR CODE HERE ----
          convolvedoneChannel = conv2(im, feature, 'valid');      % 2-D 'valid' convolution for this channel
          convolvedImage = convolvedImage + convolvedoneChannel;   % accumulate over the three channels
          
          
          
          % ------------------------
    
        end
        
        % Subtract the bias unit (correcting for the mean subtraction as well)
        % Then, apply the sigmoid function to get the hidden activation
        % ---- YOUR CODE HERE ----
        convolvedImage = sigmoid(convolvedImage + b_mean(featureNum));    % add the (mean-corrected) bias and apply the sigmoid
        
        
        
        % ------------------------
        
        % The convolved feature is the sum of the convolved values for all channels
        convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
      end
    end
    
    
    end
    
    function sigm = sigmoid(x)
        sigm = 1./(1+exp(-x));
    end

    cnnPool.m

    function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
    %cnnPool Pools the given convolved features
    %
    % Parameters:
    %  poolDim - dimension of pooling region
    %  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
    %                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
    %
    % Returns:
    %  pooledFeatures - matrix of pooled features in the form
    %                   pooledFeatures(featureNum, imageNum, poolRow, poolCol)
    %     
    
    numImages = size(convolvedFeatures, 2);
    numFeatures = size(convolvedFeatures, 1);
    convolvedDim = size(convolvedFeatures, 3);
    
    resultDim  = floor(convolvedDim / poolDim);
    pooledFeatures = zeros(numFeatures, numImages, resultDim, resultDim);
    
    % -------------------- YOUR CODE HERE --------------------
    % Instructions:
    %   Now pool the convolved features in regions of poolDim x poolDim,
    %   to obtain the 
    %   numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim) 
    %   matrix pooledFeatures, such that
    %   pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the 
    %   value of the featureNum feature for the imageNum image pooled over the
    %   corresponding (poolRow, poolCol) pooling region 
    %   (see http://ufldl/wiki/index.php/Pooling )
    %   
    %   Use mean pooling here.
    % -------------------- YOUR CODE HERE --------------------
    
    for imageNum = 1:numImages
        for featureNum = 1:numFeatures
            for poolRow = 1:resultDim
                offsetRow = 1+(poolRow-1)*poolDim;
                for poolCol = 1:resultDim
                    offsetCol = 1 + (poolCol-1)*poolDim;
                    patch = convolvedFeatures(featureNum, imageNum, offsetRow:offsetRow+poolDim-1, offsetCol:offsetCol+poolDim-1);
                    pooledFeatures(featureNum, imageNum, poolRow, poolCol) = mean(patch(:));
                end
            end
        end
    end
    
    
    end

    cnnExercise.m

    %% CS294A/CS294W Convolutional Neural Networks Exercise
    
    %  Instructions
    %  ------------
    % 
    %  This file contains code that helps you get started on the
    %  convolutional neural networks exercise. In this exercise, you will only
    %  need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
    %  this file.
    
    %%======================================================================
    %% STEP 0: Initialization
    %  Here we initialize some parameters used for the exercise.
    
    imageDim = 64;         % image dimension
    imageChannels = 3;     % number of channels (rgb, so 3)
    
    patchDim = 8;          % patch dimension
    numPatches = 50000;    % number of patches
    
    visibleSize = patchDim * patchDim * imageChannels;  % number of input units 
    outputSize = visibleSize;   % number of output units
    hiddenSize = 400;           % number of hidden units 
    
    epsilon = 0.1;	       % epsilon for ZCA whitening
    
    poolDim = 19;          % dimension of pooling region
    
    %%======================================================================
    %% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn 
    %  features from color patches. If you have completed the linear decoder
    %  execise, use the features that you have obtained from that exercise, 
    %  loading them into optTheta. Recall that we have to keep around the 
    %  parameters used in whitening (i.e., the ZCA whitening matrix and the
    %  meanPatch)
    
    % --------------------------- YOUR CODE HERE --------------------------
    % Train the sparse autoencoder and fill the following variables with 
    % the optimal parameters:
    
    optTheta =  zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
    ZCAWhite =  zeros(visibleSize, visibleSize);
    meanPatch = zeros(visibleSize, 1);
    
    load STL10Features.mat;
    
    % --------------------------------------------------------------------
    
    % Display and check to see that the features look good
    W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
    b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
    
    displayColorNetwork( (W*ZCAWhite)');
    
    %%======================================================================
    %% STEP 2: Implement and test convolution and pooling
    %  In this step, you will implement convolution and pooling, and test them
    %  on a small part of the data set to ensure that you have implemented
    %  these two functions correctly. In the next step, you will actually
    %  convolve and pool the features with the STL10 images.
    
    %% STEP 2a: Implement convolution
    %  Implement convolution in the function cnnConvolve in cnnConvolve.m
    
    % Note that we have to preprocess the images in the exact same way 
    % we preprocessed the patches before we can obtain the feature activations.
    
    load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
    
    %% Use only the first 8 images for testing
    convImages = trainImages(:, :, :, 1:8); 
    
    % NOTE: Implement cnnConvolve in cnnConvolve.m first!
    convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);
    
    %% STEP 2b: Checking your convolution
    %  To ensure that you have convolved the features correctly, we have
    %  provided some code to compare the results of your convolution with
    %  activations from the sparse autoencoder
    
    % For 1000 random points
    for i = 1:1000    
        featureNum = randi([1, hiddenSize]);
        imageNum = randi([1, 8]);
        imageRow = randi([1, imageDim - patchDim + 1]);
        imageCol = randi([1, imageDim - patchDim + 1]);    
       
        patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
        patch = patch(:);            
        patch = patch - meanPatch;
        patch = ZCAWhite * patch;
        
        features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch); 
    
        if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
            fprintf('Convolved feature does not match activation from autoencoder\n');
            fprintf('Feature Number    : %d\n', featureNum);
            fprintf('Image Number      : %d\n', imageNum);
            fprintf('Image Row         : %d\n', imageRow);
            fprintf('Image Column      : %d\n', imageCol);
            fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
            fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));       
            error('Convolved feature does not match activation from autoencoder');
        end 
    end
    
    disp('Congratulations! Your convolution code passed the test.');
    
    %% STEP 2c: Implement pooling
    %  Implement pooling in the function cnnPool in cnnPool.m
    
    % NOTE: Implement cnnPool in cnnPool.m first!
    pooledFeatures = cnnPool(poolDim, convolvedFeatures);
    
    %% STEP 2d: Checking your pooling
    %  To ensure that you have implemented pooling, we will use your pooling
    %  function to pool over a test matrix and check the results.
    
    testMatrix = reshape(1:64, 8, 8);
    expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                      mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];
                
    testMatrix = reshape(testMatrix, 1, 1, 8, 8);
            
    pooledFeatures = squeeze(cnnPool(4, testMatrix));
    
    if ~isequal(pooledFeatures, expectedMatrix)
        disp('Pooling incorrect');
        disp('Expected');
        disp(expectedMatrix);
        disp('Got');
        disp(pooledFeatures);
    else
        disp('Congratulations! Your pooling code passed the test.');
    end
    
    %%======================================================================
    %% STEP 3: Convolve and pool with the dataset
    %  In this step, you will convolve each of the features you learned with
    %  the full large images to obtain the convolved features. You will then
    %  pool the convolved features to obtain the pooled features for
    %  classification.
    %
    %  Because the convolved features matrix is very large, we will do the
    %  convolution and pooling 50 features at a time to avoid running out of
    %  memory. Reduce this number if necessary
    
    stepSize = 50;
    assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');
    
    load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
    load stlTestSubset.mat  % loads numTestImages,  testImages,  testLabels
    
    pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
        floor((imageDim - patchDim + 1) / poolDim), ...
        floor((imageDim - patchDim + 1) / poolDim) );
    pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
        floor((imageDim - patchDim + 1) / poolDim), ...
        floor((imageDim - patchDim + 1) / poolDim) );
    
    tic();
    
    for convPart = 1:(hiddenSize / stepSize)
        
        featureStart = (convPart - 1) * stepSize + 1;
        featureEnd = convPart * stepSize;
        
        fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);  
        Wt = W(featureStart:featureEnd, :);
        bt = b(featureStart:featureEnd);    
        
        fprintf('Convolving and pooling train images\n');
        convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
            trainImages, Wt, bt, ZCAWhite, meanPatch);
        pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
        pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
        toc();
        clear convolvedFeaturesThis pooledFeaturesThis;
        
        fprintf('Convolving and pooling test images\n');
        convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
            testImages, Wt, bt, ZCAWhite, meanPatch);
        pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
        pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;   
        toc();
    
        clear convolvedFeaturesThis pooledFeaturesThis;
    
    end
    
    
    % You might want to save the pooled features since convolution and pooling takes a long time
    save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
    toc();
    
    %%======================================================================
    %% STEP 4: Use pooled features for classification
    %  Now, you will use your pooled features to train a softmax classifier,
    %  using softmaxTrain from the softmax exercise.
    %  Training the softmax classifer for 1000 iterations should take less than
    %  10 minutes.
    
    % Add the path to your softmax solution, if necessary
    % addpath /path/to/solution/
    
    % Setup parameters for softmax
    softmaxLambda = 1e-4;
    numClasses = 4;
    % Reshape the pooledFeatures to form an input vector for softmax
    softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);
    softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
        numTrainImages);
    softmaxY = trainLabels;
    
    options = struct;
    options.maxIter = 200;
    softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
        numClasses, softmaxLambda, softmaxX, softmaxY, options);
    
    %%======================================================================
    %% STEP 5: Test classifer
    %  Now you will test your trained classifer against the test images
    
    softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
    softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
    softmaxY = testLabels;
    
    [pred] = softmaxPredict(softmaxModel, softmaxX);
    acc = (pred(:) == softmaxY(:));
    acc = sum(acc) / size(acc, 1);
    fprintf('Accuracy: %2.3f%%\n', acc * 100);
    
    % You should expect to get an accuracy of around 80% on the test images.
    
    The final accuracy obtained on the test images is 80.406%.

  • Original article: https://www.cnblogs.com/claireyuancy/p/6905700.html