
    Kaggle Digit Recognizer


    Handwritten digit recognition on the MNIST dataset

    This competition provides 42,000 training examples and 28,000 test examples; the original MNIST has 60,000 training and 10,000 test examples.

    I gave it a try with Logistic Regression, a 784-200-200-10 Sparse AutoEncoder, and a Convolutional AutoEncoder.






    =============== Method 1: One-Vs-All Logistic Regression ===================

    %% 
    ccc                         % author's shorthand script, presumably clear / close all / clc
    load digitData
    
    %%
    input_layer_size  = 28*28;  % 784 input pixels
    num_ys = 10;                % 10 digit classes (label 10 is mapped back to digit 0 when writing the CSV)

    X = train_x;
    [~,y] = max(train_y, [], 2);    % convert one-hot train_y to class indices 1..10
    
    % lambda = 0.1;            % earlier trial value; the assignment below overrides it
    lambda = 100;
    [all_theta] = oneVsAll(X, y, num_ys, lambda);
    
    %% ================ Part: Predict for One-Vs-All ================
    %  After training, predict labels on the training set.
    pred = predictOneVsAll(all_theta, X);
    fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
    
    %% ============== Compute test-set accuracy (test_y is KNN-based, for reference only)
    [~,test_y] = max(test_y, [], 2);
    
    pred = predictOneVsAll(all_theta, test_x);
    fprintf('\nTest Set Accuracy: %f\n', mean(double(pred == test_y)) * 100);
    
    %% write csv file
    pred(pred==10) = 0;                  % map class 10 back to digit 0
    M = [(1:length(pred))' pred(:)];     % [ImageId, Label] columns
    csvwrite('LiFeiteng0824.csv',M)
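
    The oneVsAll and predictOneVsAll helpers are not listed in the post; the sketch below is only my guess at what they do (one regularized logistic-regression classifier per digit, then pick the most confident one), using MATLAB's built-in fminunc rather than whatever optimizer the original used. The names ending in "Sketch" and the lrCost helper are made up here.

    % Hypothetical sketch, not the author's code.
    function all_theta = oneVsAllSketch(X, y, num_labels, lambda)
        [m, n] = size(X);
        X = [ones(m, 1) X];                        % prepend bias column
        all_theta = zeros(num_labels, n + 1);
        opts = optimset('GradObj', 'on', 'MaxIter', 100);
        for c = 1:num_labels                       % one binary classifier per class
            theta0 = zeros(n + 1, 1);
            all_theta(c, :) = fminunc(@(t) lrCost(t, X, double(y == c), lambda), ...
                                      theta0, opts)';
        end
    end

    function [J, grad] = lrCost(theta, X, y, lambda)
        m = length(y);
        h = 1 ./ (1 + exp(-X * theta));            % sigmoid hypothesis
        J = (-y' * log(h) - (1 - y)' * log(1 - h)) / m ...
            + (lambda / (2 * m)) * sum(theta(2:end) .^ 2);
        grad = (X' * (h - y)) / m;
        grad(2:end) = grad(2:end) + (lambda / m) * theta(2:end);
    end

    function p = predictOneVsAllSketch(all_theta, X)
        X = [ones(size(X, 1), 1) X];
        [~, p] = max(X * all_theta', [], 2);       % most confident classifier wins
    end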
    
    



    =============== Method 2: 784-200-200-10 Sparse AutoEncoder ===================

    %% STEP 0: Set the relevant parameter values
    tic
    
    inputDim = 28;
    inputSize = 28 * 28;
    numClasses = 10;
    hiddenSizeL1 = 200;    % Layer 1 Hidden Size
    hiddenSizeL2 = 200;    % Layer 2 Hidden Size
    sparsityParam = 0.1;   % desired average activation of the hidden units
                           % (denoted by the Greek letter rho in the lecture notes)
    lambda = 3e-3;         % weight decay parameter       
    beta = 3;              % weight of sparsity penalty term       
    maxIter = 100;    
    
    
    %% STEP 1: Load data
    load digitData
    trainData = train_x';
    [~, trainLabels] = max(train_y, [], 2);
    %%% (optional) augment the training data here
    
    %%% ZCA whitening (note: it changes the pixel value range)
    % trainData = ZCAWhite(trainData);
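    % A minimal sketch of what a ZCAWhite helper might look like (not in the
    % original post; the 0.1 regularizer is an assumed value).  Kept commented
    % out, like the call above, because whitening changes the pixel range:
    %   avg   = mean(trainData, 2);                        % per-pixel mean
    %   x0    = bsxfun(@minus, trainData, avg);
    %   sigma = x0 * x0' / size(x0, 2);                    % feature covariance
    %   [U, S, ~] = svd(sigma);
    %   trainData = U * diag(1 ./ sqrt(diag(S) + 0.1)) * U' * x0;   % ZCA transform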
    
    %% STEP 2: Train the first sparse autoencoder
    sae1Theta = initializeParameters(hiddenSizeL1, inputSize);
    
    options.Method = 'lbfgs';
    options.maxIter = 200;	  % Maximum number of iterations of L-BFGS to run 
    options.display = 'on';
    [sae1OptTheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                       inputSize, hiddenSizeL1, ...
                                       lambda, sparsityParam, ...
                                       beta, trainData), ...
                                  sae1Theta, options);
    
    % -------------------------------------------------------------------------
    W1 = reshape(sae1OptTheta(1:hiddenSizeL1*inputSize), hiddenSizeL1, inputSize);
    display_network(W1', 12); 
    
    
    %% STEP 3: Train the second sparse autoencoder
    [sae1Features] = feedForwardAutoencoder(sae1OptTheta, hiddenSizeL1, ...
                                            inputSize, trainData);
    
    %  Randomly initialize the parameters
    sae2Theta = initializeParameters(hiddenSizeL2, hiddenSizeL1);
    
    options.Method = 'lbfgs';
    options.maxIter = 100;	  % Maximum number of iterations of L-BFGS to run 
    options.display = 'on';
    
    [sae2OptTheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                       size(sae1Features,1), hiddenSizeL2, ...
                                       lambda, sparsityParam, ...
                                       beta, sae1Features), ...
                                  sae2Theta, options);
    
    %% STEP 4: Train the softmax classifier
    
    [sae2Features] = feedForwardAutoencoder(sae2OptTheta, hiddenSizeL2, ...
                                            hiddenSizeL1, sae1Features);
    
    %  Randomly initialize the parameters
    saeSoftmaxTheta = 0.005 * randn(hiddenSizeL2 * numClasses, 1);
    
    
    lambda = 1e-4;
    options.maxIter = 200;
    softmaxModel = softmaxTrain(hiddenSizeL2, numClasses, lambda, ...
                                sae2Features, trainLabels, options);
    % -------------------------------------------------------------------------
     saeSoftmaxOptTheta = softmaxModel.optTheta(:);
    
    
    %% STEP 5: Finetune softmax model
    
    % Implement the stackedAECost to give the combined cost of the whole model
    % then run this cell.
    
    % Initialize the stack using the parameters learned
    stack = cell(2,1);
    stack{1}.w = reshape(sae1OptTheta(1:hiddenSizeL1*inputSize), ...
                         hiddenSizeL1, inputSize);
    stack{1}.b = sae1OptTheta(2*hiddenSizeL1*inputSize+1:2*hiddenSizeL1*inputSize+hiddenSizeL1);
    stack{2}.w = reshape(sae2OptTheta(1:hiddenSizeL2*hiddenSizeL1), ...
                         hiddenSizeL2, hiddenSizeL1);
    stack{2}.b = sae2OptTheta(2*hiddenSizeL2*hiddenSizeL1+1:2*hiddenSizeL2*hiddenSizeL1+hiddenSizeL2);
    
    % Initialize the parameters for the deep model
    [stackparams, netconfig] = stack2params(stack);
    stackedAETheta = [ saeSoftmaxOptTheta ; stackparams ];
    
    options.Method = 'lbfgs'; 
    options.maxIter = 400;	  % Maximum number of iterations of L-BFGS to run 
    options.display = 'on';
    
    [stackedAEOptTheta, cost] = minFunc( @(p) stackedAECost(p, ...
                                       inputSize, hiddenSizeL2, ...   % first arg is the input size (was hiddenSizeL2), matching the stackedAEPredict calls below
                                       numClasses, netconfig, ...
                                       lambda, trainData, trainLabels), ...
                                  stackedAETheta, options);
    
    % -------------------------------------------------------------------------
    
    %% STEP 6: Test 
    %  Instructions: You will need to complete the code in stackedAEPredict.m
    %                before running this part of the code
    %
    
    testData = test_x';
    [~, testLabels] = max(test_y, [], 2);   % KNN-based pseudo-labels, for reference only
    
    [pred] = stackedAEPredict(stackedAETheta, inputSize, hiddenSizeL2, ...
                              numClasses, netconfig, testData);
    
    acc = mean(testLabels(:) == pred(:));
    fprintf('Before Finetuning Test Accuracy: %0.3f%%\n', acc * 100);
    
    [pred] = stackedAEPredict(stackedAEOptTheta, inputSize, hiddenSizeL2, ...
                              numClasses, netconfig, testData);
    
    acc = mean(testLabels(:) == pred(:));
    fprintf('After Finetuning Test Accuracy: %0.3f%%\n', acc * 100);
    toc
    
    pred(pred==10) = 0;                  % map class 10 back to digit 0
    tmp = [(1:length(pred))' pred(:)];   % [ImageId, Label] columns
    csvwrite('LiFeiteng0824.csv',tmp)
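
    feedForwardAutoencoder and stackedAEPredict come from the UFLDL starter code and are not reproduced in the post; the sketches below are my reconstruction, assuming the standard [W1(:); W2(:); b1; b2] parameter layout that the stack{} indexing above relies on (params2stack is the counterpart of the stack2params call used above).

    % Reconstruction for reference only; assumes the standard UFLDL theta layout.
    function activation = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)
        W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
        b1 = theta(2*hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize+hiddenSize);
        activation = sigmoid(bsxfun(@plus, W1 * data, b1));    % hidden-layer activations
    end

    function pred = stackedAEPredict(theta, inputSize, hiddenSize, numClasses, netconfig, data)
        softmaxTheta = reshape(theta(1:hiddenSize*numClasses), numClasses, hiddenSize);
        stack = params2stack(theta(hiddenSize*numClasses+1:end), netconfig);
        a = data;
        for d = 1:numel(stack)
            a = sigmoid(bsxfun(@plus, stack{d}.w * a, stack{d}.b));   % forward pass through the stack
        end
        [~, pred] = max(softmaxTheta * a, [], 1);                     % most likely class per column
    end

    function s = sigmoid(x)
        s = 1 ./ (1 + exp(-x));
    end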
    
    
    


    Test accuracy (measured against the KNN-based pred labels):



    =============== Method 3: 784-200-200-10 Stacked Denoising AutoEncoder ===================

    Implemented with DeepLearnToolbox

    %% 
    clear
    close all
    clc
    
    %% load data label
    load digitData
    
    %%% pre-processing 
    %% Train a 200-200 hidden unit SDAE and use it to initialize a FFNN
    %  Setup and train a stacked denoising autoencoder (SDAE)
    rng(0);
    nDim = [784 200 200];
    sae = saesetup(nDim);
    sae.ae{1}.activation_function       = 'sigm';
    sae.ae{1}.learningRate              = 1;
    sae.ae{1}.inputZeroMaskedFraction   = 0.5;
    
    sae.ae{2}.activation_function       = 'sigm';
    sae.ae{2}.learningRate              = 1;
    sae.ae{2}.inputZeroMaskedFraction   = 0.5;
    
    % sae.ae{3}.activation_function       = 'sigm';
    % sae.ae{3}.learningRate              = 0.8;
    % sae.ae{3}.inputZeroMaskedFraction   = 0.5;
    
    opts.numepochs =   30;
    opts.batchsize = 100;
    % opts.sparsityTarget = 0.05;    % LiFeiteng
    % opts.nonSparsityPenalty = 1;
    opts.dropoutFraction = 0.5;
    
    sae = saetrain(sae, train_x, opts);
    visualize(sae.ae{1}.W{1}(:,2:end)')
    
    %% Use the SDAE to initialize a FFNN
    nn = nnsetup([nDim 10]);
    nn.activation_function              = 'sigm';
    nn.learningRate                     = 1;
    
    %add pretrained weights
    nn.W{1} = sae.ae{1}.W{1};
    nn.W{2} = sae.ae{2}.W{1};
    %nn.W{3} = sae.ae{3}.W{1};
    
    % Train the FFNN
    fprintf('\n')
    opts.numepochs =   40;
    opts.batchsize = 100;
    nn = nntrain(nn, train_x, train_y, opts);
    
    %% test
    [er, bad, pred] = nntest(nn, test_x, test_y);   % note: the stock DeepLearnToolbox nntest returns only [er, bad]; pred presumably comes from a locally modified copy or from nnpredict(nn, test_x)
    
    pred(pred==10) = 0;                  % map class 10 back to digit 0
    tmp = [(1:length(pred))' pred(:)];   % [ImageId, Label] columns
    csvwrite('LiFeiteng0824.csv',tmp)
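
    One caveat on the submission file: csvwrite cannot emit the "ImageId,Label" header row that the Kaggle Digit Recognizer submission format uses. A possible alternative (assuming pred has already been mapped from label 10 back to digit 0):

    fid = fopen('LiFeiteng0824.csv', 'w');
    fprintf(fid, 'ImageId,Label\n');                        % header row expected by Kaggle
    fprintf(fid, '%d,%d\n', [1:length(pred); pred(:)']);    % one "id,label" row per test image
    fclose(fid);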
    


    State of the art!

    ==================================================================

    Ranked somewhere in the 200s. How depressing!!!


    There are plenty of 100% scores on the Leaderboard. I could get there too by cheating: scan the misclassified examples by eye one at a time and fix the test labels by hand, but that would be pointless.


    Best results on THE MNIST DATABASE, maintained by Y. LeCun:


    ==============================

    Ways to improve accuracy:

    1. Enlarge the training set: generate new images from the originals by translation, rotation, etc. (a sketch follows this list);

    2. Pre-process the images; note that applying PCA or ZCA whitening directly changes the pixel value range;

    3. Add convolution and pooling layers to the deep network;

    4. A new model...
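
    A rough sketch of point 1, not from the original post: it assumes train_x is N x 784 with each row a flattened 28x28 image, and imrotate needs the Image Processing Toolbox.

    % Hypothetical augmentation helper: random small shifts and rotations.
    function aug_x = augmentDigits(train_x)
        N = size(train_x, 1);
        aug_x = zeros(N, 784);
        for i = 1:N
            img = reshape(train_x(i, :), 28, 28);
            img = circshift(img, [randi([-2 2]) randi([-2 2])]);          % shift by up to 2 pixels
            img = imrotate(img, 10 * (2*rand - 1), 'bilinear', 'crop');   % rotate by up to +/-10 degrees
            aug_x(i, :) = img(:)';
        end
    end

    % Usage: double the training set, reusing the original labels.
    %   train_x = [train_x; augmentDigits(train_x)];
    %   train_y = [train_y; train_y];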














