  • TensorFlow tutorial

    The code examples come from https://github.com/aymericdamien/TensorFlow-Examples

    • TensorFlow first defines a computation graph; the real computation only happens at run time.
    • Before calling run you need to create a session.
    • Constants use constant, e.g. a = tf.constant(2)
    • Graph inputs use placeholder and require a dtype, e.g. a = tf.placeholder(tf.int16)

    Matrix multiplication

    matrix1 = tf.constant([[3., 3.]])  # 1*2 matrix
    matrix2 = tf.constant([[2.],[2.]]) # 2*1 matrix
    product = tf.matmul(matrix1, matrix2) # the matrix product is a 1*1 matrix
    with tf.Session() as sess:
        result = sess.run(product)   # result is a numpy ndarray
        print(result)
        # ==> [[ 12.]]
    
    
    '''
    Basic Operations example using TensorFlow library.
    Author: Aymeric Damien
    Project: https://github.com/aymericdamien/TensorFlow-Examples/
    '''
    
    from __future__ import print_function
    
    import tensorflow as tf
    
    # Basic constant operations
    # The value returned by the constructor represents the output
    # of the Constant op.
    a = tf.constant(2)
    b = tf.constant(3)
    
    # Launch the default graph.
    with tf.Session() as sess:
        print("a=2, b=3")
        print("Addition with constants: %i" % sess.run(a+b))
        print("Multiplication with constants: %i" % sess.run(a*b))
    
    # Basic Operations with variable as graph input
    # The value returned by the constructor represents the output
    # of the Variable op. (define as input when running session)
    # tf Graph input
    a = tf.placeholder(tf.int16)
    b = tf.placeholder(tf.int16)
    
    # Define some operations
    add = tf.add(a, b)
    mul = tf.multiply(a, b)
    
    # Launch the default graph.
    with tf.Session() as sess:
        # Run every operation with variable input
        print("Addition with variables: %i" % sess.run(add, feed_dict={a: 2, b: 3}))
        print("Multiplication with variables: %i" % sess.run(mul, feed_dict={a: 2, b: 3}))
    
    
    # ----------------
    # In more detail:
    # Matrix Multiplication from TensorFlow official tutorial
    
    # Create a Constant op that produces a 1x2 matrix.  The op is
    # added as a node to the default graph.
    #
    # The value returned by the constructor represents the output
    # of the Constant op.
    matrix1 = tf.constant([[3., 3.]])
    
    # Create another Constant that produces a 2x1 matrix.
    matrix2 = tf.constant([[2.],[2.]])
    
    # Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
    # The returned value, 'product', represents the result of the matrix
    # multiplication.
    product = tf.matmul(matrix1, matrix2)
    
    # To run the matmul op we call the session 'run()' method, passing 'product'
    # which represents the output of the matmul op.  This indicates to the call
    # that we want to get the output of the matmul op back.
    #
    # All inputs needed by the op are run automatically by the session.  They
    # typically are run in parallel.
    #
    # The call 'run(product)' thus causes the execution of three ops in the
    # graph: the two constants and matmul.
    #
    # The output of the op is returned in 'result' as a numpy `ndarray` object.
    with tf.Session() as sess:
        result = sess.run(product)
        print(result)
        # ==> [[ 12.]]
    

    Eager API

    For a detailed explanation see https://www.zhihu.com/question/67471378
    As noted earlier, TensorFlow first defines a computation graph and only performs the real computation at session.run.
    The Eager API changes this: TF (short for TensorFlow) functions behave like the ordinary functions we are used to, returning a result as soon as they are called, which makes debugging much easier.
    The downside is incompatibility with some existing code; for instance, it cannot be mixed with the example code in Basic1 (a short sketch after the next bullet illustrates this).

    • tfe.enable_eager_execution() — enabling eager mode must come at the very beginning of the code
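
    A short sketch of both points, assuming TF 1.x (where eager is opt-in): once eager execution is enabled, ops return concrete values immediately, while graph-only APIs such as tf.placeholder stop working.

    import tensorflow as tf
    tf.enable_eager_execution()  # must run before any other TF op

    print(tf.constant(2) + tf.constant(3))  # executes immediately: tf.Tensor(5, shape=(), dtype=int32)

    # Graph-style code from the earlier example is no longer usable, e.g.:
    # tf.placeholder(tf.int16)  # raises RuntimeError under eager execution
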
    '''
    Basic introduction to TensorFlow's Eager API.
    
    Author: Aymeric Damien
    Project: https://github.com/aymericdamien/TensorFlow-Examples/
    
    What is Eager API?
    " Eager execution is an imperative, define-by-run interface where operations are
    executed immediately as they are called from Python. This makes it easier to
    get started with TensorFlow, and can make research and development more
    intuitive. A vast majority of the TensorFlow API remains the same whether eager
    execution is enabled or not. As a result, the exact same code that constructs
    TensorFlow graphs (e.g. using the layers API) can be executed imperatively
    by using eager execution. Conversely, most models written with Eager enabled
    can be converted to a graph that can be further optimized and/or extracted
    for deployment in production without changing code. " - Rajat Monga
    
    '''
    from __future__ import absolute_import, division, print_function
    
    import numpy as np
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe
    
    # Set Eager API
    print("Setting Eager mode...")
    tfe.enable_eager_execution()
    
    # Define constant tensors
    print("Define constant tensors")
    a = tf.constant(2)
    print("a = %i" % a)
    b = tf.constant(3)
    print("b = %i" % b)
    
    # Run the operation without the need for tf.Session
    print("Running operations, without tf.Session")
    c = a + b
    print("a + b = %i" % c)
    d = a * b
    print("a * b = %i" % d)
    
    
    # Full compatibility with Numpy
    print("Mixing operations with Tensors and Numpy Arrays")
    
    # Define constant tensors
    a = tf.constant([[2., 1.],
                     [1., 0.]], dtype=tf.float32)
    print("Tensor:\n a = %s" % a)
    b = np.array([[3., 0.],
                  [5., 1.]], dtype=np.float32)
    print("NumpyArray:\n b = %s" % b)
    
    # Run the operation without the need for tf.Session
    print("Running operations, without tf.Session")
    
    c = a + b
    print("a + b = %s" % c)
    
    d = tf.matmul(a, b)
    print("a * b = %s" % d)
    
    print("Iterate through Tensor 'a':")
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            print(a[i][j])
    
    

    Convolutional neural networks

    Before reading this part you should have some understanding of convolutional neural networks; see https://www.cnblogs.com/sdu20112013/p/10149529.html
    Network parameters

    • Input layer: the dimensionality of a sample X; a 28*28 image becomes 784 features
    • Fully connected output: the number of image classes, the digits 0 through 9, 10 in total
    • dropout = 0.25 # Dropout, probability to drop a unit; dropping some neurons' outputs helps prevent overfitting

    Training parameters

    • Learning rate
    • batch_size: the number of samples used for each gradient estimate
    • num_steps: the number of training steps, i.e. how many batches are processed in total (a quick arithmetic sketch follows this list)
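
    A back-of-the-envelope check relating these numbers (assuming the standard MNIST training split of 55,000 images, which is what the listing below loads):

    num_steps, batch_size, train_size = 2000, 128, 55000
    epochs = num_steps * batch_size / train_size   # batches seen vs. dataset size
    print(round(epochs, 2))  # ~4.65 full passes over the training data
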

    Building a CNN

    • Convolution and pooling layers perform the feature extraction.
    • Convolution layer: 32 filters, each of size 5*5, with ReLU activation
    • Pooling layer: 2*2 max pooling with a stride of 2
    # Convolution Layer with 32 filters and a kernel size of 5
    conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
    # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
    conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
    
    • A second convolution + pooling pair extracts higher-level features
    # Convolution Layer with 64 filters and a kernel size of 3
    conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
    # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
    conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
    
    • Fully connected layers perform the classification
    # Flatten the data to a 1-D vector for the fully connected layer
    # The fully connected layer expects an M*N matrix; flatten squashes the stacked feature maps into a 1-D vector per sample
    fc1 = tf.contrib.layers.flatten(conv2)
    
    # Fully connected layer (in tf contrib folder for now)
    # The fully connected layer has 1024 neurons
    fc1 = tf.layers.dense(fc1, 1024)
    # Apply Dropout (if is_training is False, dropout is not applied)
    # Randomly drop some neurons' outputs to prevent overfitting; this only happens when is_training is True
    fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
    
    # Output layer, class prediction
    # Map the fully connected output to per-class scores
    out = tf.layers.dense(fc1, n_classes)
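
    As a sanity check on these layers, here is the shape arithmetic, assuming TF 1.x defaults (tf.layers.conv2d uses 'valid' padding, so each convolution shrinks the feature map; each 2*2 pool with stride 2 halves it):

    def valid_conv(size, kernel):        # 'valid' padding: no zero padding
        return size - kernel + 1

    def pool(size, window=2, stride=2):  # non-overlapping max pooling
        return (size - window) // stride + 1

    s = 28                 # MNIST image side
    s = valid_conv(s, 5)   # conv1: 28 -> 24
    s = pool(s)            # pool1: 24 -> 12
    s = valid_conv(s, 3)   # conv2: 12 -> 10
    s = pool(s)            # pool2: 10 -> 5
    print(s * s * 64)      # flattened size fed to fc1: 5*5*64 = 1600

    The 1600 here is what the fc1 shape print in the full listing below reports for each sample.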
    

    The model is now built; next we have to give it the information about the loss function, so that it knows how to compute gradients and in turn update the weights between neurons.
    TensorFlow has us define an Estimator with:

    • a loss function definition
    • an optimization algorithm definition
    • a model accuracy definition
    • TF Estimators require returning an EstimatorSpec that specifies the different ops for training, evaluating, etc.
    logits_train = conv_net(features, num_classes, dropout, reuse=False,is_training=True)
    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_train, labels=tf.cast(labels, dtype=tf.int32))) # loss function definition
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) # optimizer; mini-batch SGD would also work
    train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step()) # objective: minimize the loss
    

    At this point the model creation is complete; what remains is to convert the data into a suitable format, feed it to the model, and train.

    # Define the input function for training
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'images': mnist.train.images}, y=mnist.train.labels,
        batch_size=batch_size, num_epochs=None, shuffle=True)
    # Train the Model
    model.train(input_fn, steps=num_steps)

    # Evaluate the Model
    # Define the input function for evaluating
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'images': mnist.test.images}, y=mnist.test.labels,
        batch_size=batch_size, shuffle=False)
    # Use the Estimator 'evaluate' method
    e = model.evaluate(input_fn)
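
    Prediction is not shown in this walkthrough, but the same Estimator handles it with one more input function. A minimal sketch (assuming the mnist data and model objects from the full listing below):

    # Predict the class of a few test images
    n_images = 4
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'images': mnist.test.images[:n_images]}, shuffle=False)
    # model_fn returns pred_classes in PREDICT mode, so each item is a class id
    for pred in model.predict(input_fn):
        print("Predicted digit:", pred)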
    
    
    """ Convolutional Neural Network.
    
    Build and train a convolutional neural network with TensorFlow.
    This example is using the MNIST database of handwritten digits
    (http://yann.lecun.com/exdb/mnist/)
    
    This example is using TensorFlow layers API, see 'convolutional_network_raw' 
    example for a raw implementation with variables.
    
    Author: Aymeric Damien
    Project: https://github.com/aymericdamien/TensorFlow-Examples/
    """
    from __future__ import division, print_function, absolute_import
    
    # Import MNIST data
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)
    
    import tensorflow as tf
    
    # Training Parameters
    learning_rate = 0.001
    num_steps = 2000
    batch_size = 128
    
    # Network Parameters
    num_input = 784 # MNIST data input (img shape: 28*28)
    num_classes = 10 # MNIST total classes (0-9 digits)
    dropout = 0.25 # Dropout, probability to drop a unit
    
    
    # Create the neural network
    def conv_net(x_dict, n_classes, dropout, reuse, is_training):
        # Define a scope for reusing the variables
        with tf.variable_scope('ConvNet', reuse=reuse):
            # TF Estimator input is a dict, in case of multiple inputs
            x = x_dict['images']
    
            # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
            # Reshape to match picture format [Height x Width x Channel]
            # Tensor input become 4-D: [Batch Size, Height, Width, Channel]
            x = tf.reshape(x, shape=[-1, 28, 28, 1])
    
            # Convolution Layer with 32 filters and a kernel size of 5
            conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
            # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
            conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
    
            # Convolution Layer with 64 filters and a kernel size of 3
            conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
            # Max Pooling (down-sampling) with strides of 2 and kernel size of 2
            conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
    
            # Flatten the data to a 1-D vector for the fully connected layer
            fc1 = tf.contrib.layers.flatten(conv2)
            print("fc1 shape", fc1.shape)
    
            # Fully connected layer (in tf contrib folder for now)
            fc1 = tf.layers.dense(fc1, 1024)
            # Apply Dropout (if is_training is False, dropout is not applied)
            fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
    
            # Output layer, class prediction
            out = tf.layers.dense(fc1, n_classes)
    
        return out
    
    
    # Define the model function (following TF Estimator Template)
    def model_fn(features, labels, mode):
        # Build the neural network
        # Because Dropout have different behavior at training and prediction time, we
        # need to create 2 distinct computation graphs that still share the same weights.
        logits_train = conv_net(features, num_classes, dropout, reuse=False,
                                is_training=True)
        print("logits_train.shape",logits_train.shape)
        logits_test = conv_net(features, num_classes, dropout, reuse=True,
                               is_training=False)
    
        # Predictions
        pred_classes = tf.argmax(logits_test, axis=1)
        pred_probas = tf.nn.softmax(logits_test)
    
        # If prediction mode, early return
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
    
        # Define loss and optimizer
        loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
        optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
        train_op = optimizer.minimize(loss_op,
                                      global_step=tf.train.get_global_step())
    
        # Evaluate the accuracy of the model
        acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
    
        # TF Estimators requires to return a EstimatorSpec, that specify
        # the different ops for training, evaluating, ...
        estim_specs = tf.estimator.EstimatorSpec(
            mode=mode,
            predictions=pred_classes,
            loss=loss_op,
            train_op=train_op,
            eval_metric_ops={'accuracy': acc_op})
    
        return estim_specs
    
    # Build the Estimator
    model = tf.estimator.Estimator(model_fn)
    
    # Define the input function for training
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'images': mnist.train.images}, y=mnist.train.labels,
        batch_size=batch_size, num_epochs=None, shuffle=True)
    # Train the Model
    model.train(input_fn, steps=num_steps)
    
    # Evaluate the Model
    # Define the input function for evaluating
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x={'images': mnist.test.images}, y=mnist.test.labels,
        batch_size=batch_size, shuffle=False)
    # Use the Estimator 'evaluate' method
    e = model.evaluate(input_fn)
    
    print("Testing Accuracy:", e['accuracy'])
    

    Incidentally, here is how I understand neural networks. "Neuron" sounds mysterious, but each neuron is just y = ax + b, where a and x are matrices. Neurons are connected by weights: one neuron's output becomes the next neuron's input, multiplied by a weight w. Stacked into many layers, even though each neuron only performs a linear transformation (ReLU/sigmoid and the like are of course introduced for nonlinearity), the combination nevertheless manages to model nonlinear behavior. The backpropagation algorithm then updates the weights w between neurons across the whole network so as to minimize the error. In essence, a neural network is a computation that solves for the weight matrices w; the sketch below makes this concrete.
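
    A minimal NumPy sketch of this view (the layer sizes echo the MNIST example above; the random values are only for illustration): two layers, each just a matrix product plus a bias, with a ReLU in between.

    import numpy as np

    rng = np.random.default_rng(0)
    x  = rng.standard_normal((1, 784))            # one flattened 28*28 input
    W1 = rng.standard_normal((784, 1024)) * 0.01  # weights into the hidden layer
    b1 = np.zeros(1024)
    W2 = rng.standard_normal((1024, 10)) * 0.01   # weights into the output layer
    b2 = np.zeros(10)

    h = np.maximum(0, x @ W1 + b1)                # y = ax + b, then ReLU for nonlinearity
    logits = h @ W2 + b2                          # 10 class scores
    print(logits.shape)                           # (1, 10)

    Training would use backpropagation to adjust W1, b1, W2, b2 so that the error on these class scores shrinks.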
