  • TensorFlow RNN Tutorial and Code


    Analysis:
    I have been studying TensorFlow for a while now. Here I follow the RNN tutorial from GitHub, type out the code myself, and organize my thinking along the way.
    The RNN part breaks down into three steps:
    1. Define the parameters: data-related and training-related.
    2. Define the model, the loss function, and the optimizer.
    3. Train: prepare the data, feed it in, and print the results (a small shape sketch follows this list).
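
    Before the full listing, a minimal illustration (not part of the original code) of how one flattened MNIST image maps onto the sequence shape the network expects: n_steps=28 time steps, each a row of n_input=28 pixels.

    import numpy as np

    # One flattened MNIST image has 784 pixels; reshaping it gives 28 time steps
    # of 28 pixels each, i.e. the [n_steps, n_input] layout fed to the LSTM below.
    image = np.zeros(784, dtype=np.float32)   # stand-in for mnist.train.images[0]
    sequence = image.reshape(28, 28)
    print(sequence.shape)                     # (28, 28) -> 28 steps x 28 pixels per step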

    Code:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    from tensorflow.contrib import rnn
    
    # Download/extract MNIST and load it with one-hot labels
    mnist=input_data.read_data_sets("./data",one_hot=True)
    
    # Training parameters
    training_rate=0.001      # learning rate for the Adam optimizer
    training_iters=100000    # stop after roughly this many training examples
    batch_size=128
    display_step=10          # print loss/accuracy every display_step batches
    
    # Network parameters: each 28x28 image is fed as 28 time steps of 28 pixels
    n_input=28               # pixels per row (input size at each time step)
    n_steps=28               # rows per image (number of time steps)
    n_hidden=128             # number of LSTM hidden units
    n_classes=10             # MNIST digit classes 0-9
    
    # Placeholders for a batch of image sequences and their one-hot labels
    x=tf.placeholder("float",[None,n_steps,n_input])
    y=tf.placeholder("float",[None,n_classes])
    
    # Output-layer weights and biases that map the last LSTM output to class logits
    weights={'out':tf.Variable(tf.random_normal([n_hidden,n_classes]))}
    biases={'out':tf.Variable(tf.random_normal([n_classes]))}
    
    def RNN(x,weights,biases):
       # Split the [batch, n_steps, n_input] tensor into a list of n_steps
       # tensors of shape [batch, n_input], one per time step
       x=tf.unstack(x,n_steps,1)
       lstm_cell=rnn.BasicLSTMCell(n_hidden,forget_bias=1.0)
       outputs,states=rnn.static_rnn(lstm_cell,x,dtype=tf.float32)
       # Classify using the output of the last time step
       return tf.matmul(outputs[-1],weights['out'])+biases['out']
    
    pred=RNN(x,weights,biases)
    # Softmax cross-entropy loss averaged over the batch, minimized with Adam
    cost=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred,labels=y))
    optimizer=tf.train.AdamOptimizer(learning_rate=training_rate).minimize(cost)
    
    # Batch accuracy: fraction of predictions that match the labels
    correct_pred=tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
    accuracy=tf.reduce_mean(tf.cast(correct_pred,tf.float32))
    
    init=tf.global_variables_initializer()
    
    with tf.Session() as sess:
       sess.run(init)
       step=1
       while step*batch_size<training_iters:
          batch_x,batch_y=mnist.train.next_batch(batch_size)
          # Reshape each flat 784-pixel image into a 28-step sequence of 28 pixels
          batch_x=batch_x.reshape(batch_size,n_steps,n_input)
          sess.run(optimizer,feed_dict={x:batch_x,y:batch_y})
          if step%display_step==0:
             acc=sess.run(accuracy,feed_dict={x:batch_x,y:batch_y})
             loss=sess.run(cost,feed_dict={x:batch_x,y:batch_y})
             print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                   "{:.6f}".format(loss) + ", Training Accuracy= " +
                   "{:.5f}".format(acc))
          step+=1
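
    The listing above reports only training-batch accuracy. A minimal sketch (not from the original post) of how the trained model could be checked against part of the MNIST test set; these lines would go inside the `with tf.Session() as sess:` block, after the training loop:

    # Hypothetical addition: evaluate on the first 128 test images, reusing the
    # accuracy op, the placeholders, and the mnist dataset defined above.
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy= " + "{:.5f}".format(
          sess.run(accuracy, feed_dict={x: test_data, y: test_label})))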


    Output:

    /anaconda/bin/python2.7 /Users/xxxx/PycharmProjects/TF_3/tf_rnn.py
    Extracting ./data/train-images-idx3-ubyte.gz
    Extracting ./data/train-labels-idx1-ubyte.gz
    Extracting ./data/t10k-images-idx3-ubyte.gz
    Extracting ./data/t10k-labels-idx1-ubyte.gz
    2017-07-15 16:41:15.125981: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-07-15 16:41:15.125994: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2017-07-15 16:41:15.125997: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    2017-07-15 16:41:15.126002: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    Iter 1280, Minibatch Loss= 1.842738, Training Accuracy= 0.33594
    Iter 2560, Minibatch Loss= 1.489123, Training Accuracy= 0.50000
    Iter 3840, Minibatch Loss= 1.300060, Training Accuracy= 0.57812
    Iter 5120, Minibatch Loss= 1.244872, Training Accuracy= 0.62500
    Iter 6400, Minibatch Loss= 0.947143, Training Accuracy= 0.71094
    Iter 7680, Minibatch Loss= 0.709695, Training Accuracy= 0.75781
    Iter 8960, Minibatch Loss= 0.799844, Training Accuracy= 0.76562
    Iter 10240, Minibatch Loss= 0.594611, Training Accuracy= 0.83594
    Iter 11520, Minibatch Loss= 0.529350, Training Accuracy= 0.82031
    Iter 12800, Minibatch Loss= 0.624426, Training Accuracy= 0.82031
    Iter 14080, Minibatch Loss= 0.481889, Training Accuracy= 0.82812
    Iter 15360, Minibatch Loss= 0.449692, Training Accuracy= 0.84375
    Iter 16640, Minibatch Loss= 0.418820, Training Accuracy= 0.85938
    Iter 17920, Minibatch Loss= 0.412161, Training Accuracy= 0.85156
    Iter 19200, Minibatch Loss= 0.256099, Training Accuracy= 0.90625
    Iter 20480, Minibatch Loss= 0.227309, Training Accuracy= 0.90625
    Iter 21760, Minibatch Loss= 0.431014, Training Accuracy= 0.85938
    Iter 23040, Minibatch Loss= 0.377097, Training Accuracy= 0.87500
    Iter 24320, Minibatch Loss= 0.268153, Training Accuracy= 0.89844
    Iter 25600, Minibatch Loss= 0.170557, Training Accuracy= 0.95312
    Iter 26880, Minibatch Loss= 0.286947, Training Accuracy= 0.91406
    Iter 28160, Minibatch Loss= 0.189623, Training Accuracy= 0.94531
    Iter 29440, Minibatch Loss= 0.228949, Training Accuracy= 0.95312
    Iter 30720, Minibatch Loss= 0.157198, Training Accuracy= 0.94531
    Iter 32000, Minibatch Loss= 0.205744, Training Accuracy= 0.93750
    Iter 33280, Minibatch Loss= 0.195218, Training Accuracy= 0.92188
    Iter 34560, Minibatch Loss= 0.177956, Training Accuracy= 0.92969
    Iter 35840, Minibatch Loss= 0.131563, Training Accuracy= 0.96875
    Iter 37120, Minibatch Loss= 0.215156, Training Accuracy= 0.92969
    Iter 38400, Minibatch Loss= 0.232274, Training Accuracy= 0.94531
    Iter 39680, Minibatch Loss= 0.324053, Training Accuracy= 0.91406
    Iter 40960, Minibatch Loss= 0.196385, Training Accuracy= 0.93750
    Iter 42240, Minibatch Loss= 0.151221, Training Accuracy= 0.95312
    Iter 43520, Minibatch Loss= 0.242021, Training Accuracy= 0.95312
    Iter 44800, Minibatch Loss= 0.304008, Training Accuracy= 0.90625
    Iter 46080, Minibatch Loss= 0.185177, Training Accuracy= 0.93750
    Iter 47360, Minibatch Loss= 0.190960, Training Accuracy= 0.94531
    Iter 48640, Minibatch Loss= 0.141995, Training Accuracy= 0.94531
    Iter 49920, Minibatch Loss= 0.199995, Training Accuracy= 0.94531
    Iter 51200, Minibatch Loss= 0.193773, Training Accuracy= 0.92188
    Iter 52480, Minibatch Loss= 0.151757, Training Accuracy= 0.94531
    Iter 53760, Minibatch Loss= 0.153755, Training Accuracy= 0.94531
    Iter 55040, Minibatch Loss= 0.141472, Training Accuracy= 0.93750
    Iter 56320, Minibatch Loss= 0.168057, Training Accuracy= 0.96094
    Iter 57600, Minibatch Loss= 0.135691, Training Accuracy= 0.96094
    Iter 58880, Minibatch Loss= 0.097003, Training Accuracy= 0.97656
    Iter 60160, Minibatch Loss= 0.274090, Training Accuracy= 0.92188
    Iter 61440, Minibatch Loss= 0.147230, Training Accuracy= 0.95312
    Iter 62720, Minibatch Loss= 0.106019, Training Accuracy= 0.96094
    Iter 64000, Minibatch Loss= 0.101133, Training Accuracy= 0.97656
    Iter 65280, Minibatch Loss= 0.169548, Training Accuracy= 0.93750
    Iter 66560, Minibatch Loss= 0.101966, Training Accuracy= 0.96094
    Iter 67840, Minibatch Loss= 0.106501, Training Accuracy= 0.96875
    Iter 69120, Minibatch Loss= 0.082817, Training Accuracy= 0.96875
    Iter 70400, Minibatch Loss= 0.192926, Training Accuracy= 0.96094
    Iter 71680, Minibatch Loss= 0.086935, Training Accuracy= 0.96875
    Iter 72960, Minibatch Loss= 0.052052, Training Accuracy= 0.98438
    Iter 74240, Minibatch Loss= 0.129968, Training Accuracy= 0.95312
    Iter 75520, Minibatch Loss= 0.058070, Training Accuracy= 0.99219
    Iter 76800, Minibatch Loss= 0.089518, Training Accuracy= 0.96875
    Iter 78080, Minibatch Loss= 0.106092, Training Accuracy= 0.98438
    Iter 79360, Minibatch Loss= 0.223101, Training Accuracy= 0.92188
    Iter 80640, Minibatch Loss= 0.069419, Training Accuracy= 0.97656
    Iter 81920, Minibatch Loss= 0.050585, Training Accuracy= 0.99219
    Iter 83200, Minibatch Loss= 0.048002, Training Accuracy= 0.98438
    Iter 84480, Minibatch Loss= 0.094293, Training Accuracy= 0.96875
    Iter 85760, Minibatch Loss= 0.152253, Training Accuracy= 0.96094
    Iter 87040, Minibatch Loss= 0.085382, Training Accuracy= 0.97656
    Iter 88320, Minibatch Loss= 0.147018, Training Accuracy= 0.95312
    Iter 89600, Minibatch Loss= 0.099780, Training Accuracy= 0.96094
    Iter 90880, Minibatch Loss= 0.118362, Training Accuracy= 0.93750
    Iter 92160, Minibatch Loss= 0.110498, Training Accuracy= 0.96094
    Iter 93440, Minibatch Loss= 0.077664, Training Accuracy= 0.98438
    Iter 94720, Minibatch Loss= 0.070865, Training Accuracy= 0.96094
    Iter 96000, Minibatch Loss= 0.156309, Training Accuracy= 0.94531
    Iter 97280, Minibatch Loss= 0.116825, Training Accuracy= 0.94531
    Iter 98560, Minibatch Loss= 0.099852, Training Accuracy= 0.96875
    Iter 99840, Minibatch Loss= 0.116358, Training Accuracy= 0.96875
    
    Process finished with exit code 0


    Original tutorial link: http://www.tensorflownews.com/2017/07/15/tensorflow-rnn-turorial-mnist-code/

