  • TensorFlow | ReluGrad input is not finite. Tensor had NaN values

    The Problem

    I ran into this problem while using TensorFlow to train a CNN on the MNIST dataset. The key piece of code is the following:

    cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    

    With a learning rate of 1e-4 there is no problem, but when I raise the learning rate to 0.01 or 0.5, the program very quickly fails with:

    tensorflow.python.framework.errors.InvalidArgumentError: ReluGrad input is not finite. : Tensor had NaN values
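
    To reproduce the comparison, one option is to feed the learning rate through a placeholder so the same graph can be run with both 1e-4 and 0.01. This is only a minimal sketch, not the original script; x, y_, y_conv and keep_prob are assumed to come from the standard MNIST CNN tutorial model.

    # Sketch (assumption): make the learning rate a placeholder so that
    # 1e-4 and 0.01 can be compared on the same graph.
    lr = tf.placeholder(tf.float32, shape=[])
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
    train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)

    # In the training loop:
    # sess.run(train_step, feed_dict={x: batch[0], y_: batch[1],
    #                                 keep_prob: 0.5, lr: 1e-4})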
    

    Analysis

    Learning Rate

    So I added a few lines of code, hoping to expose the state of y_conv and cross_entropy:

    y_conv = tf.Print(y_conv, [y_conv], "y_conv: ")
    cross_entropy = tf.Print(cross_entropy, [cross_entropy], "cross_entropy: ")
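
    Note that tf.Print is a pass-through op: the messages are only emitted when the tensor it returns is evaluated, so the wrapped tensors have to be the ones that feed the loss and the train step. A minimal sketch of the ordering I assume (not necessarily the exact original script):

    # Wrap y_conv before building the loss, and wrap the loss before
    # handing it to the optimizer, so the print ops sit on the path that
    # sess.run(train_step, ...) actually executes.
    y_conv = tf.Print(y_conv, [y_conv], "y_conv: ")
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
    cross_entropy = tf.Print(cross_entropy, [cross_entropy], "cross_entropy: ")
    train_step = tf.train.AdamOptimizer(lr).minimize(cross_entropy)  # lr placeholder from the sketch above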
    

    With a learning rate of 0.01, the program reports an error:

    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [3.0374929e-06 0.0059775524 0.980205...]
    step 0, training accuracy 0.04
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [9.2028862e-10 1.4812358e-05 0.044873074...]
    I tensorflow/core/kernels/logging_ops.cc:64] cross_entropy: [648.49146]
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [0.024463326 1.4828938e-31 0...]
    step 1, training accuracy 0.2
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [2.4634053e-11 3.3087209e-34 0...]
    I tensorflow/core/kernels/logging_ops.cc:64] cross_entropy: [nan]
    step 2, training accuracy 0.14
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [nan nan nan...]
    W tensorflow/core/common_runtime/executor.cc:1027] 0x7ff51d92a940 Compute status: Invalid argument: ReluGrad input is not finite. : Tensor had NaN values
    

    With a learning rate of 1e-4, the program does not report an error:

    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [0.00056920078 8.4922984e-09 0.00033719366...]
    step 0, training accuracy 0.14
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [7.0613837e-10 9.28294e-09 0.00016230672...]
    I tensorflow/core/kernels/logging_ops.cc:64] cross_entropy: [439.95135]
    step 1, training accuracy 0.16
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [0.031509314 3.6221365e-05 0.015359053...]
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [3.7112056e-07 1.8543299e-09 8.9234991e-06...]
    I tensorflow/core/kernels/logging_ops.cc:64] cross_entropy: [436.37653]
    step 2, training accuracy 0.12
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [0.015578311 0.0026688741 0.44736364...]
    I tensorflow/core/kernels/logging_ops.cc:64] y_conv: [6.0428465e-07 0.0001744287 0.026451336...]
    I tensorflow/core/kernels/logging_ops.cc:64] cross_entropy: [385.33765]
    

    At this point we can see that an overly large learning rate is one cause of this error.

    According to the Stanford CS 224D lecture notes, when NaN values appear while training a deep neural network, a very likely cause is that the learning rate is too large: the gradient values grow too big and the gradients explode. The lecture note gives a precise definition of the Gradient Explosion Problem:

    During experimentation, once the gradient value grows extremely large, it causes an overflow (i.e. NaN) which is easily detectable at runtime; this issue is called the Gradient Explosion Problem.
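
    As a minimal illustration (assuming a plain gradient-descent update rather than Adam, purely for intuition), the weight update is

    \( W \leftarrow W - \eta \, \frac{\partial E}{\partial W} \)

    With a large learning rate \( \eta \) and an already large gradient, each step changes the weights by a huge amount, which tends to make the next gradient even larger; within a few steps the values overflow, which matches the y_conv and cross_entropy traces above.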

    Solutions

    1. Decrease the learning rate appropriately.
    2. Apply gradient clipping, as suggested in the Stanford CS 224D lecture note. Gradient clipping was first proposed by Thomas Mikolov: whenever the gradients reach a certain threshold, they are scaled back to a smaller value (see the TensorFlow sketch after Algorithm 1 below).

    To solve the problem of exploding gradients, Thomas Mikolov first introduced a simple heuristic solution that clips gradients to a small number whenever they explode. That is, whenever they reach a certain threshold, they are set back to a small number as shown in Algorithm 1.
    Algorithm 1: Gradient clipping
    \( \frac{\partial E}{\partial W} \to g \)
    if \( \Vert g \Vert \ge threshold \) then
        \( \frac{threshold}{\Vert g \Vert} \, g \to g \)
    end if
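
    In TensorFlow this heuristic can be implemented by splitting minimize() into compute_gradients() and apply_gradients() and clipping in between. Below is a minimal sketch for the MNIST CNN above; the threshold of 5.0 and the choice of tf.clip_by_global_norm are my own assumptions, not part of the original post.

    # Sketch: clip gradients before applying them, so that a large learning
    # rate cannot produce an unbounded update. `cross_entropy` is the same
    # loss as defined earlier in this post.
    optimizer = tf.train.AdamOptimizer(1e-2)
    grads_and_vars = optimizer.compute_gradients(cross_entropy)
    grads, variables = zip(*grads_and_vars)
    # Rescale all gradients together whenever their global norm exceeds 5.0
    # (the threshold in Algorithm 1; the value 5.0 is an assumption).
    clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
    train_step = optimizer.apply_gradients(list(zip(clipped_grads, variables)))

    Alternatively, tf.clip_by_norm can be applied to each gradient individually; either way the update stays bounded even when the raw gradient explodes.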

  • Original post: https://www.cnblogs.com/rgvb178/p/7226890.html