  • TensorFlow: Visualizing Graphs and Embeddings with TensorBoard

    • Could you try using python -m tensorboard --logdir "${MODEL_DIR}" instead? I suspect that this will fix your issue.
    • I should have written tensorboard.main instead of TensorBoard: python -m tensorboard.main --logdir "${MODEL_DIR}"
    • Although this starts TensorBoard on port 6006, it still fails to load the event files; that problem is set aside for later while the TensorFlow study continues.

    • Updates to a few existing demos

    Suppose the log path is E:\MyTensorBoard\logs, and logs in turn contains train and test. TensorBoard runs by reading the event files: in cmd, type tensorboard --logdir=<path to the log directory>. With this layout, writing:
    
    tensorboard --logdir=E:\MyTensorBoard\logs
    
    produces "No scalar data was found", "No image data was found", and so on, even though repeated checks turn up nothing wrong with either the code or the event files.
    
    Workarounds. Method 1: change cmd's working directory to the parent of the log directory, i.e. cd /d E:\MyTensorBoard, then give just the directory name after the equals sign rather than the full path: tensorboard --logdir=logs. Method 2: use doubled slashes, i.e. tensorboard --logdir=E://MyTensorBoard//logs. Finally, open the reported URL http://hostIP:6006 in Chrome and the charts appear at last.
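    Both workarounds boil down to getting the path to TensorBoard intact. As a minimal sketch of the same idea from Python (the snippet below is illustrative, not from the original post), passing an argument list to subprocess sidesteps shell quoting and backslash trouble on Windows entirely:
    
    import os
    import subprocess
    
    # Assumed layout from the text above; adjust to your own machine.
    logdir = os.path.join("E:\\", "MyTensorBoard", "logs")
    assert os.path.isdir(logdir), "event-file directory not found: %s" % logdir
    
    # Each list element reaches tensorboard verbatim, so no quoting is needed.
    subprocess.run(["tensorboard", "--logdir", logdir], check=True)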
    
    

    Tensor and Graph Visualization

    • Summary: any statistic to be displayed on TensorBoard.
    • tf.name_scope(): adds hierarchy to the Tensors in the Graph. TensorBoard renders the hierarchy as specified in the code; initially only the top level is drawn, and clicking a node expands it to show the next level of detail.
    • tf.summary.scalar(): records a scalar statistic.
    • tf.summary.histogram(): records a Tensor of any shape and tracks the distribution of its values.
    • tf.summary.merge_all(): adds one op that runs every summary op, so they need not be executed by hand one at a time.
    • tf.summary.FileWriter: writes Summaries to disk. It requires a storage path logdir, and if a Graph object is passed, the Graph Visualization will show Tensor shape information. After running the summary op, pass the result to the add_summary() method. A minimal sketch of these ops follows this list; the full demo comes after it.
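    A minimal, self-contained sketch (TF 1.x API) exercising exactly the ops listed above; the scope name 'toy' and the path logs/toy are placeholders:
    
    import tensorflow as tf
    
    with tf.name_scope('toy'):
        v = tf.Variable(tf.random_normal([100]), name='v')
        tf.summary.scalar('mean', tf.reduce_mean(v))   # scalar statistic
        tf.summary.histogram('values', v)              # value distribution
    
    merged = tf.summary.merge_all()                    # one op that runs every summary op
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Passing the graph makes TensorBoard show Tensor shape information.
        writer = tf.summary.FileWriter('logs/toy', sess.graph)
        summary = sess.run(merged)
        writer.add_summary(summary, global_step=0)
        writer.close()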
    
    import gzip
    import struct
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn import preprocessing
    from sklearn.metrics import accuracy_score
    import tensorflow as tf
    
    
    # MNIST data is stored in binary format,
    # and we transform them into numpy ndarray objects by the following two utility functions
    def read_image(file_name):
        with gzip.open(file_name, 'rb') as f:
            buf = f.read()
            index = 0
            magic, images, rows, columns = struct.unpack_from('>IIII', buf, index)
            index += struct.calcsize('>IIII')
    
            image_size = '>' + str(images * rows * columns) + 'B'
            ims = struct.unpack_from(image_size, buf, index)
    
            im_array = np.array(ims).reshape(images, rows, columns)
            return im_array
    
    
    def read_label(file_name):
        with gzip.open(file_name, 'rb') as f:
            buf = f.read()
            index = 0
            magic, labels = struct.unpack_from('>II', buf, index)
            index += struct.calcsize('>II')
    
            label_size = '>' + str(labels) + 'B'
            labels = struct.unpack_from(label_size, buf, index)
    
            label_array = np.array(labels)
            return label_array
    
    
    print ("Start processing MNIST handwritten digits data...")
    train_x_data = read_image("MNIST_data/train-images-idx3-ubyte.gz")
    train_x_data = train_x_data.reshape(train_x_data.shape[0], -1).astype(np.float32)
    train_y_data = read_label("MNIST_data/train-labels-idx1-ubyte.gz")
    test_x_data = read_image("MNIST_data/t10k-images-idx3-ubyte.gz")
    test_x_data = test_x_data.reshape(test_x_data.shape[0], -1).astype(np.float32)
    test_y_data = read_label("MNIST_data/t10k-labels-idx1-ubyte.gz")
    
    train_x_minmax = train_x_data / 255.0
    test_x_minmax = test_x_data / 255.0
    
    # Of course you can also use the utility function to read in MNIST provided by tensorflow
    # from tensorflow.examples.tutorials.mnist import input_data
    # mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
    # train_x_minmax = mnist.train.images
    # train_y_data = mnist.train.labels
    # test_x_minmax = mnist.test.images
    # test_y_data = mnist.test.labels
    
    # We evaluate the softmax regression model by sklearn first
    eval_sklearn = False
    if eval_sklearn:
        print ("Start evaluating softmax regression model by sklearn...")
        reg = LogisticRegression(solver="lbfgs", multi_class="multinomial")
        reg.fit(train_x_minmax, train_y_data)
        np.savetxt('coef_softmax_sklearn.txt', reg.coef_, fmt='%.6f')  # Save coefficients to a text file
        test_y_predict = reg.predict(test_x_minmax)
        print ("Accuracy of test set: %f" % accuracy_score(test_y_data, test_y_predict))
    
    eval_tensorflow = True
    batch_gradient = False
    
    
    # Summary: statistics to be displayed on TensorBoard.
    def variable_summaries(var):
        with tf.name_scope('summaries'):
            mean = tf.reduce_mean(var)
            tf.summary.scalar('mean', mean)
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
            tf.summary.scalar('stddev', stddev)
            tf.summary.scalar('max', tf.reduce_max(var))
            tf.summary.scalar('min', tf.reduce_min(var))
            tf.summary.histogram('histogram', var)
    
    
    if eval_tensorflow:
        print ("Start evaluating softmax regression model by tensorflow...")
        # reformat y into one-hot encoding style
        lb = preprocessing.LabelBinarizer()
        lb.fit(train_y_data)
        train_y_data_trans = lb.transform(train_y_data)
        test_y_data_trans = lb.transform(test_y_data)
    
        x = tf.placeholder(tf.float32, [None, 784])
        with tf.name_scope('weights'):
            W = tf.Variable(tf.zeros([784, 10]))
            variable_summaries(W)
        with tf.name_scope('biases'):
            b = tf.Variable(tf.zeros([10]))
            variable_summaries(b)
        with tf.name_scope('Wx_plus_b'):
            V = tf.matmul(x, W) + b
            tf.summary.histogram('pre_activations', V)
        with tf.name_scope('softmax'):
            y = tf.nn.softmax(V)
            tf.summary.histogram('activations', y)
    
        y_ = tf.placeholder(tf.float32, [None, 10])
    
        with tf.name_scope('cross_entropy'):
            # Note: this raw formulation can be numerically unstable (log(0) gives NaN);
            # the official demo later in this post uses tf.nn.softmax_cross_entropy_with_logits instead.
            loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
            tf.summary.scalar('cross_entropy', loss)
    
        with tf.name_scope('train'):
            optimizer = tf.train.GradientDescentOptimizer(0.5)
            train = optimizer.minimize(loss)
    
        with tf.name_scope('evaluate'):
            with tf.name_scope('correct_prediction'):
                correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
            with tf.name_scope('accuracy'):
                accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
                tf.summary.scalar('accuracy', accuracy)
    
        init = tf.global_variables_initializer()
    
        sess = tf.Session()
        sess.run(init)
    
        merged = tf.summary.merge_all()
        train_writer = tf.summary.FileWriter('tmp/train', sess.graph)
        test_writer = tf.summary.FileWriter('tmp/test')
    
        if batch_gradient:
            for step in range(300):
                sess.run(train, feed_dict={x: train_x_minmax, y_: train_y_data_trans})
                if step % 10 == 0:
                    print ("Batch Gradient Descent processing step %d" % step)
            print ("Finally we got the estimated results, take such a long time...")
        else:
            for step in range(1000):
                if step % 10 == 0:
                    summary, acc = sess.run([merged, accuracy], feed_dict={x: test_x_minmax, y_: test_y_data_trans})
                    test_writer.add_summary(summary, step)
                    print ("Stochastic Gradient Descent processing step %d accuracy=%.2f" % (step, acc))
                else:
                    sample_index = np.random.choice(train_x_minmax.shape[0], 100)
                    batch_xs = train_x_minmax[sample_index, :]
                    batch_ys = train_y_data_trans[sample_index, :]
                    summary, _ = sess.run([merged, train], feed_dict={x: batch_xs, y_: batch_ys})
                    train_writer.add_summary(summary, step)
    
        np.savetxt('coef_softmax_tf.txt', np.transpose(sess.run(W)), fmt='%.6f')  # Save coefficients to a text file
        print ("Accuracy of test set: %f" % sess.run(accuracy, feed_dict={x: test_x_minmax, y_: test_y_data_trans}))
    

    Embeddings

    • TensorBoard is TensorFlow's built-in visualization tool; Embeddings is one of its features, used to explore high-dimensional data in two or three dimensions. The script below is a minimal example.
    # -*- coding: utf-8 -*-
    # @author: ranjiewen
    # @date: 2017-02-08
    # @description: hello world program to set up embedding projector in TensorBoard based on MNIST
    # @ref: http://yann.lecun.com/exdb/mnist/, https://www.tensorflow.org/images/mnist_10k_sprite.png
    #
    
    import numpy as np
    import tensorflow as tf
    from tensorflow.contrib.tensorboard.plugins import projector
    from tensorflow.examples.tutorials.mnist import input_data
    import os
    
    PATH_TO_MNIST_DATA = "MNIST_data"
    LOG_DIR = "emd"
    IMAGE_NUM = 10000
    
    # The log directory must exist before metadata and checkpoints are written into it.
    if not os.path.exists(LOG_DIR):
        os.makedirs(LOG_DIR)
    
    # Read in MNIST data by utility functions provided by TensorFlow
    mnist = input_data.read_data_sets(PATH_TO_MNIST_DATA, one_hot=False)
    
    # Extract target MNIST image data
    plot_array = mnist.test.images[:IMAGE_NUM]  # shape: (n_observations, n_features)
    
    # Generate meta data
    np.savetxt(os.path.join(LOG_DIR, 'metadata.tsv'), mnist.test.labels[:IMAGE_NUM], fmt='%d')
    
    # Sprite image: download https://www.tensorflow.org/images/mnist_10k_sprite.png
    # (100x100 thumbnails) by hand and place it in LOG_DIR;
    # this script does not download it for you.
    PATH_TO_SPRITE_IMAGE = os.path.join(LOG_DIR, 'mnist_10k_sprite.png')
    
    # To visualise your embeddings, there are 3 things you need to do:
    # 1) Setup a 2D tensor variable(s) that holds your embedding(s)
    session = tf.InteractiveSession()
    embedding_var = tf.Variable(plot_array, name='embedding')
    tf.global_variables_initializer().run()
    
    # 2) Periodically save your embeddings in a LOG_DIR
    # Here we just save the Tensor once, so we set global_step to a fixed number
    saver = tf.train.Saver()
    saver.save(session, os.path.join(LOG_DIR, "model.ckpt"), global_step=0)
    
    # 3) Associate metadata and sprite image with your embedding
    # Use the same LOG_DIR where you stored your checkpoint.
    summary_writer = tf.summary.FileWriter(LOG_DIR)
    
    config = projector.ProjectorConfig()
    # You can add multiple embeddings. Here we add only one.
    embedding = config.embeddings.add()
    embedding.tensor_name = embedding_var.name
    # Link this tensor to its metadata file (e.g. labels).
    embedding.metadata_path = os.path.join(LOG_DIR, 'metadata.tsv')
    # Link this tensor to its sprite image.
    embedding.sprite.image_path = PATH_TO_SPRITE_IMAGE
    embedding.sprite.single_image_dim.extend([28, 28])
    # Saves a configuration file that TensorBoard will read during startup.
    projector.visualize_embeddings(summary_writer, config)
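    After this script runs, LOG_DIR (emd) holds the checkpoint, metadata.tsv, the sprite image, and the projector_config.pbtxt written by visualize_embeddings. Launch TensorBoard on that directory and open the Projector tab:
    
    tensorboard --logdir=emd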
    
    

    Official demo

    • Note: change the file paths below to match your own machine.
    """A simple MNIST classifier which displays summaries in TensorBoard.
    This is an unimpressive MNIST model, but it is a good example of using
    tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of
    naming summary tags so that they are grouped meaningfully in TensorBoard.
    It demonstrates the functionality of every TensorBoard dashboard.
    """
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    import argparse
    import os
    import sys
    
    import tensorflow as tf
    
    from tensorflow.examples.tutorials.mnist import input_data
    
    FLAGS = None  # e.g. "F:\RANJIEWEN\Deep_learning\TensorFlow\log"
    
    
    def train():
      # Import data
      mnist = input_data.read_data_sets(FLAGS.data_dir,
                                        one_hot=True,
                                        fake_data=FLAGS.fake_data)
    
      sess = tf.InteractiveSession()
      # Create a multilayer model.
    
      # Input placeholders
      with tf.name_scope('input'):
        x = tf.placeholder(tf.float32, [None, 784], name='x-input')
        y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')
    
      with tf.name_scope('input_reshape'):
        image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
        tf.summary.image('input', image_shaped_input, 10)
    
      # We can't initialize these variables to 0 - the network will get stuck.
      def weight_variable(shape):
        """Create a weight variable with appropriate initialization."""
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)
    
      def bias_variable(shape):
        """Create a bias variable with appropriate initialization."""
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)
    
      def variable_summaries(var):
        """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
        with tf.name_scope('summaries'):
          mean = tf.reduce_mean(var)
          tf.summary.scalar('mean', mean)
          with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
          tf.summary.scalar('stddev', stddev)
          tf.summary.scalar('max', tf.reduce_max(var))
          tf.summary.scalar('min', tf.reduce_min(var))
          tf.summary.histogram('histogram', var)
    
      def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
        """Reusable code for making a simple neural net layer.
        It does a matrix multiply, bias add, and then uses ReLU to nonlinearize.
        It also sets up name scoping so that the resultant graph is easy to read,
        and adds a number of summary ops.
        """
        # Adding a name scope ensures logical grouping of the layers in the graph.
        with tf.name_scope(layer_name):
          # This Variable will hold the state of the weights for the layer
          with tf.name_scope('weights'):
            weights = weight_variable([input_dim, output_dim])
            variable_summaries(weights)
          with tf.name_scope('biases'):
            biases = bias_variable([output_dim])
            variable_summaries(biases)
          with tf.name_scope('Wx_plus_b'):
            preactivate = tf.matmul(input_tensor, weights) + biases
            tf.summary.histogram('pre_activations', preactivate)
          activations = act(preactivate, name='activation')
          tf.summary.histogram('activations', activations)
          return activations
    
      hidden1 = nn_layer(x, 784, 500, 'layer1')
    
      with tf.name_scope('dropout'):
        keep_prob = tf.placeholder(tf.float32)
        tf.summary.scalar('dropout_keep_probability', keep_prob)
        dropped = tf.nn.dropout(hidden1, keep_prob)
    
      # Do not apply softmax activation yet, see below.
      y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)
    
      with tf.name_scope('cross_entropy'):
        # The raw formulation of cross-entropy,
        #
        # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),
        #                               reduction_indices=[1]))
        #
        # can be numerically unstable.
        #
        # So here we use tf.nn.softmax_cross_entropy_with_logits on the
        # raw outputs of the nn_layer above, and then average across
        # the batch.
        diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
        with tf.name_scope('total'):
          cross_entropy = tf.reduce_mean(diff)
      tf.summary.scalar('cross_entropy', cross_entropy)
    
      with tf.name_scope('train'):
        train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
            cross_entropy)
    
      with tf.name_scope('accuracy'):
        with tf.name_scope('correct_prediction'):
          correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        with tf.name_scope('accuracy'):
          accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
      tf.summary.scalar('accuracy', accuracy)
    
      # Merge all the summaries and write them out to
      # FLAGS.log_dir/train and FLAGS.log_dir/test
      merged = tf.summary.merge_all()
      train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
      test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
      tf.global_variables_initializer().run()
    
      # Train the model, and also write summaries.
      # Every 10th step, measure test-set accuracy, and write test summaries
      # All other steps, run train_step on training data, & add training summaries
    
      def feed_dict(train):
        """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
        if train or FLAGS.fake_data:
          xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
          k = FLAGS.dropout
        else:
          xs, ys = mnist.test.images, mnist.test.labels
          k = 1.0
        return {x: xs, y_: ys, keep_prob: k}
    
      for i in range(FLAGS.max_steps):
        if i % 10 == 0:  # Record summaries and test-set accuracy
          summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
          test_writer.add_summary(summary, i)
          print('Accuracy at step %s: %s' % (i, acc))
        else:  # Record train set summaries, and train
          if i % 100 == 99:  # Record execution stats
            run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
            run_metadata = tf.RunMetadata()
            summary, _ = sess.run([merged, train_step],
                                  feed_dict=feed_dict(True),
                                  options=run_options,
                                  run_metadata=run_metadata)
            train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
            train_writer.add_summary(summary, i)
            print('Adding run metadata for', i)
          else:  # Record a summary
            summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
            train_writer.add_summary(summary, i)
      train_writer.close()
      test_writer.close()
    
    
    def main(_):
      if tf.gfile.Exists(FLAGS.log_dir):
        tf.gfile.DeleteRecursively(FLAGS.log_dir)
      tf.gfile.MakeDirs(FLAGS.log_dir)
      train()
    
    
    if __name__ == '__main__':
      parser = argparse.ArgumentParser()
      parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                          default=False,
                          help='If true, uses fake data for unit testing.')
      parser.add_argument('--max_steps', type=int, default=1000,
                          help='Number of steps to run trainer.')
      parser.add_argument('--learning_rate', type=float, default=0.001,
                          help='Initial learning rate')
      parser.add_argument('--dropout', type=float, default=0.9,
                          help='Keep probability for training dropout.')
      # parser.add_argument(
      #     '--data_dir',
      #     type=str,
      #     default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
      #                          'tensorflow/mnist/input_data'),
      #     help='Directory for storing input data')
      # parser.add_argument(
      #     '--log_dir',
      #     type=str,
      #     default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
      #                          'tensorflow/mnist/logs/mnist_with_summaries'),
      #     help='Summaries log directory')
    
      parser.add_argument(
          '--data_dir',
          type=str,
          default=os.getenv('TEST_TMPDIR', r'F:\RANJIEWEN\Deep_learning\TensorFlow\MNIST_data'),
          help='Directory for storing input data')
      parser.add_argument(
          '--log_dir',
          type=str,
          default=os.getenv('TEST_TMPDIR', r'F:\RANJIEWEN\Deep_learning\TensorFlow\log'),
          help='Summaries log directory')
      FLAGS, unparsed = parser.parse_known_args()
      tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
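    A usage sketch (the file name mnist_with_summaries.py is an assumption; substitute whatever you saved the script as):
    
    python mnist_with_summaries.py --max_steps 1000
    tensorboard --logdir=F:\RANJIEWEN\Deep_learning\TensorFlow\log
    
    Note that main() deletes and recreates log_dir on every run, so each run starts from fresh event files.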
    
    
  • Original post: https://www.cnblogs.com/ranjiewen/p/7510031.html