  • TensorFlow 2.0 Study Notes, Chapter 2, Section 4

    2.4 Loss Functions
    Loss function (loss): the gap between the predicted value (y) and the known answer (y_).
    The optimization goal of a neural network is to minimize the loss. Common choices:
    - MSE (mean squared error)
    - custom loss
    - CE (cross entropy)
    Mean squared error: MSE(y_, y) = Σ_{i=1}^{n} (y - y_)^2 / n
    loss_mse = tf.reduce_mean(tf.square(y_-y))
    import tensorflow as tf
    import numpy as np
    
    SEED = 23455
    
    rdm = np.random.RandomState(seed=SEED)
    x = rdm.rand(32,2)
    y_ = [[x1 + x2 + (rdm.rand()/10.0 - 0.05)] for (x1,x2) in x] # labels y_ = x1 + x2 plus noise in [-0.05, 0.05)
    x = tf.cast(x,dtype = tf.float32)
    
    w1 = tf.Variable(tf.random.normal([2,1],stddev=1, seed = 1)) # initialize a 2x1 weight matrix
    
    epochs = 15000
    lr = 0.002
    
    for epoch in range(epochs):
        with tf.GradientTape() as tape:
            y = tf.matmul(x,w1)
            loss_mse = tf.reduce_mean(tf.square(y_-y))
    
        grads = tape.gradient(loss_mse,w1) # gradient of loss_mse with respect to w1
        w1.assign_sub(lr*grads) # update: subtract lr (learning rate) * gradient from w1
    
        if epoch % 500 == 0:
            print('After %d training steps, w1 is' % (epoch))
            print(w1.numpy(), "\n")
    print("Final w1 is:",w1.numpy())

    Results (w1 converges toward [1, 1], since the labels are y_ = x1 + x2 plus small noise):

    After 0 training steps,w1 is
    [[-0.8096241]
    [ 1.4855157]]

    After 500 training steps,w1 is
    [[-0.21934733]
    [ 1.6984866 ]]

    After 1000 training steps,w1 is
    [[0.0893971]
    [1.673225 ]]

    After 1500 training steps,w1 is
    [[0.28368822]
    [1.5853055 ]]

    ........

    ........

    After 14000 training steps,w1 is
    [[0.9993659]
    [0.999166 ]]

    After 14500 training steps,w1 is
    [[1.0002553 ]
    [0.99838644]]

    Final w1 is: [[1.0009792]
    [0.9977485]]


    Custom loss function
    Example: predicting product sales. Over-predicting wastes cost; under-predicting loses profit.
    If profit != cost, the loss produced by MSE does not maximize profit.
    Custom loss: loss(y_, y) = Σ f(y_, y), where

    f(y_, y) = { profit * (y_ - y),  y < y_    (under-predicted: lost profit)
               { cost * (y - y_),    y >= y_   (over-predicted: wasted cost)

    Written as a TensorFlow expression:
    loss_zdy = tf.reduce_sum(tf.where(tf.greater(y_, y), profit * (y_ - y), cost * (y - y_)))
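
    A minimal sketch (the tensors below are made-up illustrative values, not from the original notes) of how tf.where picks one branch per element:
    import tensorflow as tf
    
    y_ = tf.constant([[1.0], [1.0]])  # true sales
    y = tf.constant([[0.8], [1.3]])   # predictions: first too low, second too high
    profit, cost = 99, 1
    
    # element-wise: where y_ > y take profit*(y_-y), otherwise cost*(y-y_)
    loss = tf.reduce_sum(tf.where(tf.greater(y_, y), profit * (y_ - y), cost * (y - y_)))
    print(loss.numpy())  # ≈ 99*0.2 + 1*0.3 = 20.1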

    If the cost is 1 yuan and the profit is 99 yuan, under-prediction is the expensive mistake, so the trained parameters come out on the large side and predicted sales are higher; conversely, with cost 99 and profit 1, the parameters come out smaller and predicted sales are lower.
    import tensorflow as tf
    import numpy as np
    
    profit = 1
    cost = 99
    SEED = 23455
    rdm = np.random.RandomState(seed=SEED)
    x = rdm.rand(32,2)
    y_ = [[x1+x2 + rdm.rand()/10.0-0.05] for x1,x2 in x]  # generate labels from the numpy array before casting
    x = tf.cast(x,tf.float32)
    w1 = tf.Variable(tf.random.normal([2,1],stddev=1,seed=1))
    
    epochs = 10000
    lr = 0.002
    
    for epoch in range(epochs):
        with tf.GradientTape() as tape:
            y = tf.matmul(x,w1)
            loss_zdy = tf.reduce_sum(tf.where(tf.greater(y_,y),(y_-y)*profit,(y-y_)*cost))
    
        grads = tape.gradient(loss_zdy,w1)  # gradient of the custom loss with respect to w1
        w1.assign_sub(lr*grads)
    
        if epoch % 500 == 0:
            print("after %d epoch w1 is:"%epoch)
            print(w1.numpy(), '\n')
            print('--------------')
    print('final w1 is',w1.numpy())
    
    # With cost = 1 and profit = 99, the trained parameters are [[1.1231122][1.0713713]], both > 1: the model skews toward predicting more.
    # With cost = 99 and profit = 1, the trained parameters are [[0.95219666][0.909771  ]], both < 1: the model skews toward predicting less.

    Cross-entropy loss CE (cross entropy): measures the distance between two probability distributions.
    H(y_, y) = -Σ y_ * ln(y)
    Example: in binary classification the ground truth is y_ = (1, 0); two predictions are y1 = (0.6, 0.4) and y2 = (0.8, 0.2).
    Which one is closer to the ground truth?
    H1((1,0),(0.6,0.4)) = -(1*ln0.6 + 0*ln0.4) = 0.511
    H2((1,0),(0.8,0.2)) = -(1*ln0.8 + 0*ln0.2) = 0.223
    Since H1 > H2, prediction y2 is more accurate.
    Cross entropy in TensorFlow:
    tf.losses.categorical_crossentropy(y_, y)
    import tensorflow as tf
    loss_ce1 = tf.losses.categorical_crossentropy([1,0],[0.6,0.4])
    loss_ce2 = tf.losses.categorical_crossentropy([1,0],[0.8,0.2])
    print("loss_ce1",loss_ce1)
    print("loss_ce2",loss_ce2)
    #loss_ce1 tf.Tensor(0.5108256, shape=(), dtype=float32)
    #loss_ce2 tf.Tensor(0.22314353, shape=(), dtype=float32)
    # loss_ce2 is smaller, so y2 is closer to the ground truth
    Combining softmax with cross entropy
    The raw outputs are passed through softmax first, and then the cross entropy between y_ and y is computed:
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
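
    A short sketch (the logits below are illustrative values, not from the original notes) showing that softmax followed by categorical_crossentropy matches the fused op:
    import tensorflow as tf
    
    y_ = tf.constant([[1., 0., 0.], [0., 1., 0.]])             # one-hot labels
    logits = tf.constant([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]])   # raw network outputs
    
    # two steps: softmax first, then cross entropy on the probabilities
    y = tf.nn.softmax(logits)
    loss_two_step = tf.losses.categorical_crossentropy(y_, y)
    
    # one fused op applied directly to the raw logits
    loss_fused = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits)
    
    print(loss_two_step.numpy())  # the two results agree up to float rounding
    print(loss_fused.numpy())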