  • Stanford Machine Learning Course, Exercise 3

    Exercise 3: Multivariate Linear Regression

    Preprocessing the data
    Preprocessing (normalizing) the inputs significantly improves gradient descent's efficiency.
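The effect of this preprocessing, z-score normalization of each feature, can be sketched in plain Python (standard library only; the sample living-area values below are made up for illustration):

```python
from statistics import mean, stdev

def zscore(column):
    """Scale a feature column to zero mean and unit (sample) std,
    matching Matlab's default std()."""
    mu, sigma = mean(column), stdev(column)
    return [(v - mu) / sigma for v in column], mu, sigma

# Hypothetical living-area feature (square feet)
areas = [2104, 1600, 2400, 1416, 3000]
scaled, mu, sigma = zscore(areas)
print(mean(scaled), stdev(scaled))  # mean ≈ 0, stdev ≈ 1
```

Keeping the returned mu and sigma is important: the same transform must later be applied to any input we want to predict on.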

    Matlab code

    x=load('L:\MachineLearning2016\ex3x.dat');
    y=load('L:\MachineLearning2016\ex3y.dat');
    m = length(x(:,1));
    x = [ones(m, 1), x];
    sigma = std(x);
    mu = mean(x);
    x(:,2) = (x(:,2) - mu(2))./ sigma(2);
    x(:,3) = (x(:,3) - mu(3))./ sigma(3); %% normalize the input features
    theta = zeros( size( x(1,:) ) )';
    alpha= 0.18;
    J = zeros(50, 1); %% run only 50 iterations here
    for num_iterations = 1:50
        J(num_iterations) = (x*theta - y)' * (x*theta - y) / (2*m); %% cost function
        theta = theta - alpha/m * x' * (x*theta - y); %% gradient descent update (note: no extra factor of 2)
    end
    
    % now plot J
    % technically, the first J corresponds to the zeroth iteration,
    % but Matlab/Octave doesn't have a zero index
    figure;
    plot(0:49, J(1:50), '-')
    xlabel('Number of iterations')
    ylabel('Cost J')
    
    %Prediction
    realx =[1,1650,3];
    realx(2) = (realx(2) - mu(2))./ sigma(2);
    realx(3) = (realx(3) - mu(3))./ sigma(3);
    realx*theta
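The loop above can be sketched in plain Python on a toy dataset (the data and helper name here are my own, chosen so the true model y = 1 + 2x is known):

```python
def gradient_descent(X, y, alpha, iters):
    """Batch gradient descent for linear regression.
    X: rows of features, each starting with the bias term 1.
    Returns theta and the cost history J."""
    m, n = len(X), len(X[0])
    theta = [0.0] * n
    costs = []
    for _ in range(iters):
        # residuals r_i = h(x_i) - y_i
        r = [sum(t * xi for t, xi in zip(theta, row)) - yi
             for row, yi in zip(X, y)]
        costs.append(sum(ri * ri for ri in r) / (2 * m))
        # theta_j -= alpha/m * sum_i r_i * x_ij
        theta = [tj - alpha / m * sum(ri * row[j] for ri, row in zip(r, X))
                 for j, tj in enumerate(theta)]
    return theta, costs

# Toy data from y = 1 + 2x; features already on a small scale
X = [[1, 0], [1, 1], [1, 2], [1, 3]]
y = [1, 3, 5, 7]
theta, costs = gradient_descent(X, y, alpha=0.3, iters=200)
print([round(t, 3) for t in theta])  # → [1.0, 2.0]
```

With a well-chosen alpha the cost history decreases monotonically, which is exactly what the `plot(0:49, J, '-')` above is meant to confirm.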

    Normal equations

    Without preprocessing the data

    x=load('L:\MachineLearning2016\ex3x.dat');
    y=load('L:\MachineLearning2016\ex3y.dat');
    m = length(x(:,1));
    x = [ones(m, 1), x];
    theta = (x'*x)\(x'*y); %% closed-form normal-equation solution
    J3=  (x*theta - y)' * (x * theta -y)/m/2; 
    realx =[1,1650,3];
    realx*theta

    TIPS: The advantage of the normal equations is that no feature normalization is needed. The drawback is that the matrix computations are relatively expensive.
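The normal-equation solve θ = (XᵀX)⁻¹Xᵀy can be sketched for the same kind of toy data (pure Python, solving the 2×2 system by Cramer's rule; a real implementation would use a linear solver as Matlab's backslash does):

```python
def normal_equation_2d(X, y):
    """Solve (X'X) theta = X'y for a 2-parameter model via Cramer's rule."""
    # Build X'X (2x2) and X'y (2x1)
    a = sum(r[0] * r[0] for r in X); b = sum(r[0] * r[1] for r in X)
    c = b;                           d = sum(r[1] * r[1] for r in X)
    e = sum(r[0] * yi for r, yi in zip(X, y))
    f = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * d - b * c
    return [(e * d - b * f) / det, (a * f - c * e) / det]

# Toy data from y = 1 + 2x; no normalization needed here
X = [[1, 0], [1, 1], [1, 2], [1, 3]]
y = [1, 3, 5, 7]
theta = normal_equation_2d(X, y)
print(theta)  # → [1.0, 2.0], the exact fit in one step
```

Unlike gradient descent, this gives the exact least-squares solution in a single step, with no alpha to tune and no iteration count to worry about.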

    One open question: when normalizing the prediction input [1, 1650, 3], should we directly use the mean and standard deviation of the training samples? (Yes — the prediction input must be transformed with the training set's mu and sigma, exactly as done in the gradient-descent section above.)
    Note: the feature-scaled version above ran only 50 iterations, so its result still differs noticeably from the normal-equation result. I haven't bothered to verify this further.

  • Original article: https://www.cnblogs.com/slankka/p/9158537.html