zoukankan      html  css  js  c++  java
  • TensorFlow2.0初体验

    TF2.0默认为动态图,即eager模式。意味着TF能像Pytorch一样不用在session中才能输出中间参数值了,那么动态图和静态图毕竟是有区别的,tf2.0也会有写法上的变化。不过值得吐槽的是,tf2.0启动速度仍然比Pytorch慢的多。

    操作被记录在磁带中(tape)
    这是一个关键的变化。在TF0.x到TF1.X时代,操作(operation)被加入到Graph中。但现在,操作会被梯度带记录,我们要做的仅仅是让前向传播和计算损失的过程发生在梯度带的上下文管理器中。

     with tf.GradientTape() as tape:
            logits = mnist_model(images, training=True)
            loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
                #  loss_value 必须在tape内部
    
        grads = tape.gradient(loss_value, mnist_model.variables)
        optimizer.apply_gradients(zip(grads, mnist_model.variables),
                                global_step=tf.train.get_or_create_global_step())

    注意到这里的tape.gradient用来计算损失函数和model参数的导数。我们在之前的版本要么使用优化器的minimize功能,要么使用tf.gradients来计算导数。在eager模式,tf.gradients不能使用。

    # coding: utf-8
    
    # pytorch: loss.backward(), optimizer.step()完成梯度计算和参数更新;
    # tf2.0通过: grads = tape.gradient(), optimizer.apply_gradients()来实现!
    # reference: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/tensorflow_v2/notebooks/3_NeuralNetworks/convolutional_network.ipynb
    
    from __future__ import absolute_import,division,print_function
    
    import tensorflow as tf
    from tensorflow.keras import Model, layers
    import numpy as np
    
    
    # MNIST dataset parameters.
    num_classes = 10 # total classes (0-9 digits).
    
    # Training parameters.
    learning_rate = 0.001
    training_steps = 200
    batch_size = 128
    display_step = 10
    
    # Network parameters.
    conv1_filters = 32 # number of filters for 1st conv layer.
    conv2_filters = 64 # number of filters for 2nd conv layer.
    fc1_units = 1024 # number of neurons for 1st fully-connected layer.
    
    
    # Prepare MNIST data.
    from tensorflow.keras.datasets import mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # Convert to float32.
    x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
    # Normalize images value from [0, 255] to [0, 1].
    x_train, x_test = x_train / 255., x_test / 255.
    
    # Use tf.data API to shuffle and batch data.
    train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
    
    
    # Create TF Model.
    class ConvNet(Model):
        # Set layers.
        def __init__(self):
            super(ConvNet, self).__init__()
            # Convolution Layer with 32 filters and a kernel size of 5.
            self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
            # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
            self.maxpool1 = layers.MaxPool2D(2, strides=2)
    
            # Convolution Layer with 64 filters and a kernel size of 3.
            self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
            # Max Pooling (down-sampling) with kernel size of 2 and strides of 2.
            self.maxpool2 = layers.MaxPool2D(2, strides=2)
    
            # Flatten the data to a 1-D vector for the fully connected layer.
            self.flatten = layers.Flatten()
    
            # Fully connected layer.
            self.fc1 = layers.Dense(1024)
            # Apply Dropout (if is_training is False, dropout is not applied).
            self.dropout = layers.Dropout(rate=0.5)
    
            # Output layer, class prediction.
            self.out = layers.Dense(num_classes)
    
        # Set forward pass.
        def call(self, x, is_training=False):
            x = tf.reshape(x, [-1, 28, 28, 1])
            x = self.conv1(x)
            x = self.maxpool1(x)
            x = self.conv2(x)
            x = self.maxpool2(x)
            x = self.flatten(x)
            x = self.fc1(x)
            x = self.dropout(x, training=is_training)
            x = self.out(x)
            if not is_training:
                # tf cross entropy expect logits without softmax, so only
                # apply softmax when not training.
                x = tf.nn.softmax(x)
            return x
    
    # Build neural network model.
    conv_net = ConvNet()
    
    
    # Cross-Entropy Loss.
    # Note that this will apply 'softmax' to the logits.
    def cross_entropy_loss(x, y):
        # Convert labels to int 64 for tf cross-entropy function.
        y = tf.cast(y, tf.int64)
        # Apply softmax to logits and compute cross-entropy.
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
        # Average loss across the batch.
        return tf.reduce_mean(loss)
    
    # Accuracy metric.
    def accuracy(y_pred, y_true):
        # Predicted class is the index of highest score in prediction vector (i.e. argmax).
        correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
        return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)
    
    # Stochastic gradient descent optimizer.
    optimizer = tf.optimizers.Adam(learning_rate)
    
    
    # Optimization process.
    def run_optimization(x, y):
        # Wrap computation inside a GradientTape for automatic differentiation.
        with tf.GradientTape() as g:
            # Forward pass.
            pred = conv_net(x, is_training=True)
            # Compute loss.
            loss = cross_entropy_loss(pred, y)
    
        # Variables to update, i.e. trainable variables.
        trainable_variables = conv_net.trainable_variables
    
        # Compute gradients.
        gradients = g.gradient(loss, trainable_variables)
    
        # Update W and b following gradients.
        optimizer.apply_gradients(zip(gradients, trainable_variables))
    
    
    # Run training for the given number of steps.
    for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
        # Run the optimization to update W and b values.
        run_optimization(batch_x, batch_y)
    
        if step % display_step == 0:
            pred = conv_net(batch_x)
            loss = cross_entropy_loss(pred, batch_y)
            acc = accuracy(pred, batch_y)
            print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))
    
    
    
    # Test model on validation set.
    pred = conv_net(x_test)
    print("Test Accuracy: %f" % accuracy(pred, y_test))

    注意:

    - TF2.0默认为动态图,没有回话Session了;

    - 代码中注意 `for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):` 的使用;

    - Pycharm中注意:from tensorflow.keras import Model, layers ;跳不进去查看内部实现;用面向对象的思想写网络结构;init,build,call等函数实现;

     

    Reference:

  • 相关阅读:
    [国家集训队]数颜色 / 维护队列
    【模板】二逼平衡树(线段树+平衡树)
    jenkins实现接口自动化持续集成(python+pytest+ Allure+git)
    Locust快速上手指南
    缓解多分类的样本不均衡问题
    PlayStation@4功能介绍及测试应用
    APP专项测试-弱网测试
    游戏自动化测试-局内战斗
    Windows下JMeter分布式压测环境搭建
    基于simhash的文本去重原理
  • 原文地址:https://www.cnblogs.com/ranjiewen/p/10861265.html
Copyright © 2011-2022 走看看