  • Getting Started with TensorFlow: Using TensorBoard (the "No scalar data was found" problem)

    1. Start TensorBoard by running tensorboard.py directly:

    (tensorflow) C:\Users\IRay>python D:\software\anaconda\envs\tensorflow\Lib\site-packages\tensorflow\tensorboard\tensorboard.py --logdir=D:\tmp\tensorflow\mnist\logs\fully_connected_feed

    2. If TensorBoard itself is installed, the command can be used directly:

    (tensorflow) C:\Users\IRay>tensorboard --logdir=D:\tmp\tensorflow\mnist\logs\fully_connected_feed

    3. After entering the command, the console shows:

    Starting TensorBoard b'47' at http://0.0.0.0:6006
    (Press CTRL+C to quit)

    4. Now open the printed address in a browser. If it fails to load (e.g., an IE parsing problem), use this address instead:

    http://localhost:6006/

    If the page shows "No scalar data was found" or similar, TensorBoard has not found the event log files.

    Change the terminal's working directory to the directory containing the event log files. Also note that the path after logdir= needs no quotes (double quotes work; single quotes cause an error):

    (tensorflow) C:\Users\IRay>D:
    
    (tensorflow) D:\>tensorboard --logdir=D:\tmp\tensorflow
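    If TensorBoard still reports no scalar data, it can help to confirm that event files actually exist under the log directory. A minimal sketch using only the standard library (glob's recursive pattern needs Python 3.5+; D:/tmp/tensorflow is the logdir used above, so adjust the path to your own):

    import glob
    import os
    
    logdir = 'D:/tmp/tensorflow'  # the directory passed to --logdir above
    # TensorBoard looks for files named events.out.tfevents.* in logdir and its subdirectories
    pattern = os.path.join(logdir, '**', 'events.out.tfevents.*')
    event_files = glob.glob(pattern, recursive=True)
    if event_files:
        for f in event_files:
            print(f)
    else:
        print('No event files found - check the logdir path.')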

    Also remember to clear Spyder's session (or restart it); otherwise each run adds its summary ops to the same default graph and the event records pile up. One way to avoid this in code is sketched below.
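    A sketch of one way to do this (tf.reset_default_graph is a TF 1.x API; the timestamped subdirectory is just a convention, not required):

    import time
    import tensorflow as tf
    
    # start each run with a fresh graph, so summary ops from earlier runs
    # in the same Spyder/IPython session do not accumulate
    tf.reset_default_graph()
    
    # optionally give each run its own event-file directory, so TensorBoard
    # shows the runs side by side instead of mixing their records
    logdir = 'D:/tmp/tensorflow/mnist/run-%d' % int(time.time())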

    The following code uses summary ops to record tensors, with a multi-layer neural network on MNIST as the example:

    # -*- coding: utf-8 -*-
    """
    Created on Mon Sep 11 10:16:34 2017
    
    multi-layer softmax regression
    
    @author: Wangjc
    """
    
    import tensorflow as tf
    import tensorflow.examples.tutorials.mnist.input_data as input_data
    # the full module path may be needed here, or an import error occurs
    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
    # read_data_sets downloads the MNIST data set, or loads it if already present;
    # the download can take a while (about 5 minutes)
    
    sess = tf.InteractiveSession()
    # connects to the C++ backend for computation.
    # Normally the full graph is built first and then run in a session;
    # InteractiveSession lets graph construction be interleaved with execution.
    
    x = tf.placeholder("float", shape=[None, 784])
    y_ = tf.placeholder("float", shape=[None, 10])
    
    
    def weight_variable(shape):
        # initialize weights from a truncated normal distribution with stddev 0.1
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)
        
    def bias_variable(shape):
        # initialize biases to the constant value 0.1
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)
    
    def conv2d(x, W):
        # convolution with filter W, stride 1, and SAME (zero) padding;
        # x must have the shape [batch, height, width, channels],
        # and strides/ksize follow the same dimension order as x
        return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')
    
    def max_pool_2x2(x):
        # max-pooling over 2x2 windows with stride 2 and SAME padding
        return tf.nn.max_pool(x, ksize=[1,2,2,1],
                              strides=[1,2,2,1], padding='SAME')
    
    
    #------------------------------------------------
    x_image = tf.reshape(x, [-1,28,28,1])
    # conv1 needs x as a 4-D tensor of shape [batch, height, width, channels];
    # -1 lets reshape infer the batch dimension
        
    with tf.name_scope('conv1'):
        # 'with' plus name_scope defines a namespace, shown in TensorBoard as one region
        with tf.name_scope('weight'):
            W_conv1 = weight_variable([5,5,1,32])
            tf.summary.histogram('conv1'+'/weight', W_conv1)
            # record a histogram summary ('name', value)
        with tf.name_scope('bias'):
            b_conv1 = bias_variable([32])
            tf.summary.histogram('conv1'+'/bias', b_conv1)
    # first convolutional layer:
    # extract 32 features from every 5x5 patch, so the shape is [5,5,1(channel),32]
    
        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    
    with tf.name_scope('pool1'):    
        h_pool1 = max_pool_2x2(h_conv1)
    
    #--------------------------------------------
    with tf.name_scope('conv2'):
        with tf.name_scope('weight'):    
            W_conv2=weight_variable([5,5,32,64])
            tf.summary.histogram('weight',W_conv2)
        with tf.name_scope('bias'):  
            b_conv2=bias_variable([64])
            tf.summary.histogram('bias',b_conv2)
    #build the 2nd conv layer:
    #get 64 features from every 5*5 patch, so the shape is [5,5,32(channel),64]
    
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    with tf.name_scope('pool2'):    
        h_pool2 = max_pool_2x2(h_conv2)
    
    #----------------------------------------
    # pooling has reduced the image size to 7x7;
    # add a fully connected layer with 1024 neurons.
    # the pooled tensor must be flattened before the matmul
    with tf.name_scope('fc1'):
        with tf.name_scope('weight'):    
            W_fc1 = weight_variable([7*7*64, 1024])
            tf.summary.histogram('weight',W_fc1)
        with tf.name_scope('bias'):
            b_fc1 = bias_variable([1024])
            tf.summary.histogram('bias',b_fc1)
    
        h_pool2_flat = tf.reshape(h_pool2,[-1, 7*7*64])
        
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1) + b_fc1)
    
    #------------------------------------
    #output layer
    with tf.name_scope('out'):
        keep_prob = tf.placeholder("float")
        h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)
    # to reduce overfitting, apply dropout before the output layer.
    # the placeholder holds the probability that a neuron's output is kept
    
        with tf.name_scope('weight'):
            W_fc2 = weight_variable([1024, 10])
            tf.summary.histogram('weight',W_fc2)
        with tf.name_scope('bias'):
            b_fc2 = bias_variable([10])
            tf.summary.histogram('bias',b_fc2)
        y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
    
    #---------------------------------
    # train and evaluate the model
    # use the Adam optimizer
    
    cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
    # note: tf.log(y_conv) produces NaN if y_conv reaches 0;
    # tf.nn.softmax_cross_entropy_with_logits is the numerically stable alternative
    tf.summary.scalar('cross_entropy', cross_entropy)
    # record a scalar summary ('name', value)
    train_step=tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    
    #sess = tf.Session()
    
    merged = tf.summary.merge_all()
    # merge all summary nodes into a single op
    writer = tf.summary.FileWriter('D:/tmp/tensorflow/mnist/', sess.graph)
    # set the directory the event files are written to
    
    sess.run(tf.global_variables_initializer())
    for i in range(500):
        batch = mnist.train.next_batch(50)
        if i%100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_:batch[1],keep_prob:1.0})
            print("step %d, training accuracy %g"%(i, train_accuracy))
            result = sess.run(merged, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
            # the merged summary op must be run explicitly
            writer.add_summary(result, i)
            # write the serialized summary to the event file
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
        
    print("test accuracy %g"%accuracy.eval(feed_dict={
            x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
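
    To double-check that the scalar was actually written, the event files can be read back with tf.train.summary_iterator (a TF 1.x API). A minimal sketch, assuming the FileWriter directory used above:

    import glob
    import tensorflow as tf
    
    # walk every event file the FileWriter produced and print the logged scalar
    for path in glob.glob('D:/tmp/tensorflow/mnist/events.out.tfevents.*'):
        for event in tf.train.summary_iterator(path):
            for value in event.summary.value:
                if value.tag == 'cross_entropy':
                    print(event.step, value.simple_value)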
  • Original post: https://www.cnblogs.com/Osler/p/7687204.html