  • Batch gradient descent in C++

    At each step the weight vector is moved in the direction of the greatest rate of decrease of the error function, and so this approach is known as gradient descent or steepest descent.
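
    In symbols (a standard formulation; the learning-rate symbol η is not named in the text above), each step updates the weight vector w by

    \[ \mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} - \eta\,\nabla E\!\left(\mathbf{w}^{(\tau)}\right), \]

    where E is the error function evaluated over the whole training set and η > 0 is the learning rate.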

    Techniques that use the whole data set at once are called batch methods. With the method of gradient descent used to perform the training, the advantages of batch learning include the following:

    1) accurate estimation of the gradient vector (i.e., the derivative of the cost function with respect to the weight vector w), thereby guaranteeing, under simple conditions, convergence of the method of steepest descent to a local minimum (the gradient for the linear model used below is written out after this list);

    2) parallelization of the learning process.
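
    For the one-variable linear model h(x) = a + b·x fitted by the C++ program below, the batch cost over n samples and its gradient components (these correspond to J, grad0, and grad1 in the code) are

    \[ J(a,b) = \frac{1}{2n}\sum_{i=1}^{n}\bigl(a + b x_i - y_i\bigr)^2, \qquad \frac{\partial J}{\partial a} = \frac{1}{n}\sum_{i=1}^{n}\bigl(a + b x_i - y_i\bigr), \qquad \frac{\partial J}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}\bigl(a + b x_i - y_i\bigr)\,x_i. \]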

    However, from a practical perspective, batch learning is rather demanding in terms of storage requirements.

    #include <iostream>
    #include <vector>
    #include <cmath>

    /* Batch gradient descent: fit y = a + b*x to five sample points */
    int main() {
        double datax[]={1,2,3,4,5};   // sample inputs x_i
        double datay[]={1,1,2,2,4};   // sample targets y_i
        std::vector<double> v_datax,v_datay;

        for(size_t i=0;i<sizeof(datax)/sizeof(datax[0]);++i) {
            v_datax.push_back(datax[i]);
            v_datay.push_back(datay[i]);
        }

        double a=0,b=0;   // model parameters, both initialized to zero
        double J=0.0;     // current value of the cost function

        // initial cost J(a,b) = (1/2n) * sum_i (a + b*x_i - y_i)^2
        for(std::vector<double>::iterator iterx=v_datax.begin(),itery=v_datay.begin();iterx!=v_datax.end() && itery!=v_datay.end();++iterx,++itery) {
            J+=(a+b*(*iterx)-*itery)*(a+b*(*iterx)-*itery);
        }
        J=J*0.5/v_datax.size();
                                
        while(true) {
            // batch gradients: accumulate the per-sample errors over the whole data set
            double grad0=0,grad1=0;
            for(std::vector<double>::iterator iterx=v_datax.begin(),itery=v_datay.begin();iterx!=v_datax.end() && itery!=v_datay.end();++iterx,++itery) {
                grad0+=(a+b*(*iterx)-*itery);
                grad1+=(a+b*(*iterx)-*itery)*(*iterx);
            }

            grad0=grad0/v_datax.size();
            grad1=grad1/v_datax.size();

            // 0.03 is the learning rate (alpha); step both parameters downhill
            a=a-0.03*grad0;
            b=b-0.03*grad1;
            double MSE=0;   // cost after the update (the code reuses the name MSE)

            for(std::vector<double>::iterator iterx=v_datax.begin(),itery=v_datay.begin();iterx!=v_datax.end() && itery!=v_datay.end();++iterx,++itery) {
                MSE+=(a+b*(*iterx)-*itery)*(a+b*(*iterx)-*itery);
            }
            MSE=MSE*0.5/v_datax.size();
            
            // stop when the cost no longer decreases appreciably
            if(std::abs(J-MSE)<0.0000001)
                break;
            J=MSE;
        }

        std::cout<<"Result of batch gradient descent:"<<std::endl;
        std::cout<<"a = "<<a<<std::endl;
        std::cout<<"b = "<<b<<std::endl;

        return 0;
    }
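
    For this data set the fit can be checked against the closed-form least-squares solution: with sample means x̄ = 3 and ȳ = 2, b = Σ(xᵢ−x̄)(yᵢ−ȳ) / Σ(xᵢ−x̄)² = 7/10 = 0.7 and a = ȳ − b·x̄ = −0.1, so the loop should report values near a = −0.1 and b = 0.7. Below is a minimal sketch of that check, independent of the program above (the helper name least_squares is my own):

    #include <iostream>
    #include <vector>
    #include <utility>

    // Closed-form simple linear regression: returns (a, b) minimizing
    // sum_i (a + b*x_i - y_i)^2, computed via the normal equations.
    std::pair<double,double> least_squares(const std::vector<double>& x, const std::vector<double>& y) {
        double mx=0, my=0;
        for(size_t i=0;i<x.size();++i) { mx+=x[i]; my+=y[i]; }
        mx/=x.size(); my/=y.size();

        double num=0, den=0;   // covariance and variance accumulators
        for(size_t i=0;i<x.size();++i) {
            num+=(x[i]-mx)*(y[i]-my);
            den+=(x[i]-mx)*(x[i]-mx);
        }
        double b=num/den;
        return {my-b*mx, b};   // a = ybar - b*xbar
    }

    int main() {
        std::pair<double,double> ab=least_squares({1,2,3,4,5},{1,1,2,2,4});
        std::cout<<"a = "<<ab.first<<", b = "<<ab.second<<std::endl;   // expect a = -0.1, b = 0.7
        return 0;
    }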

    In a statistical context, batch learning may be viewed as a form of statistical inference. It is therefore well suited for solving nonlinear regression problems.

  • Original post: https://www.cnblogs.com/donggongdechen/p/7399322.html