  • Getting Started with TensorFlow in Practice (1)

    First, we use TensorFlow to build a simple linear regression, to get familiar with TensorFlow's data types and execution model.

    import tensorflow as tf
    import numpy as np
    import matplotlib.pyplot as plt
    
    rng = np.random
    
    # Parameters
    learning_rate = 0.01
    training_epochs = 10000
    display_step = 50  # print a log every 50 epochs
    
    # Training data
    train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167, 7.042,
                          10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
    train_Y = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                        2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
    n_samples = train_X.shape[0]  # number of samples
    
    # Graph inputs (placeholders)
    X = tf.placeholder("float")
    Y = tf.placeholder("float")
    
    # Model weight and bias, randomly initialized
    W = tf.Variable(rng.randn(), name="weight")
    b = tf.Variable(rng.randn(), name="bias")
    # <tf.Variable 'weight:0' shape=() dtype=float32_ref>
    # <tf.Variable 'bias:0' shape=() dtype=float32_ref>
    
    # Linear model: pred = W*X + b
    pred = tf.add(tf.multiply(X, W), b)
    
    # Cost: mean squared error
    cost = tf.reduce_sum(tf.pow(pred-Y, 2)) / (2 * n_samples)
    # Gradient descent
    # Note: minimize() updates W and b automatically, since Variables default to trainable=True
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    
    # Initialize all variables
    init = tf.global_variables_initializer()
    
    with tf.Session() as sess:
        sess.run(init)
        # Start training
        # Feed in all the training data, one sample at a time
        for epoch in range(training_epochs):
            for (x, y) in zip(train_X, train_Y):  # zip pairs each x with its y
                sess.run(optimizer, feed_dict={X: x, Y: y})
    
            # Print a log every display_step epochs
            if (epoch+1) % display_step == 0:
                c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
                print ("Epoch:%04d cost=" %(epoch+1,), '{:.9f}'.format(c), "W=", sess.run(W), "b=", sess.run(b))
    
        print("Optimization Finished!")
        training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
        print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b))
    
        # Plot the fit
        # figure 1
        plt.figure()
        plt.plot(train_X, train_Y, 'ro', label='Original data')
        plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
        plt.legend()
    
        # Test samples
        test_X = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
        test_Y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])
    
        print("Testing...(Mean square loss Comparison)")
        testing_cost = sess.run(
            tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
            feed_dict={X: test_X, Y: test_Y})  # same cost function as above, evaluated on the test set
        print("Testing cost=", testing_cost)
        print("Absolute mean square loss difference:", abs(training_cost - testing_cost))
    
        # figure 2
        plt.figure()
        plt.plot(test_X, test_Y, 'bo', label='Testing data')
        plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
        plt.legend()
        plt.show()

    After training finishes, we evaluate on a held-out test set. The training and testing results are:

    Optimization Finished!
    Training cost= 0.07699074 W= 0.24960834 b= 0.80136055
    Testing...(Mean square loss Comparison)
    Testing cost= 0.07910849
    Absolute mean square loss difference: 0.002117753
    

    The code above also produces two plots: figure 1 shows the original data with the fitted line, and figure 2 shows the test data against the same fitted line.

    Next, we build a simple logistic-regression example for handwritten-digit recognition, to get a further feel for how a computation graph is constructed and executed in a TensorFlow application. The hope is that working through the example makes the overall approach sink in. The code is as follows:

    import tensorflow as tf
    
    # Load the MNIST dataset
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/temp/data", one_hot=True)
    print(mnist)
    
    # Parameters
    learning_rate = 0.01
    training_epochs = 50
    batch_size = 100    # feed the model this many samples at a time
    display_step = 5
    
    # tf Graph input
    x = tf.placeholder(tf.float32, [None, 784])  # MNIST images: 28*28 = 784 pixels, flattened
    y = tf.placeholder(tf.float32, [None, 10])   # one-hot labels for the 10 digit classes
    
    # Model weights and bias
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    
    # Model: softmax regression
    pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
    
    # Cost function: cross entropy
    cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
    # Gradient descent
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    
    # Initialize all variables
    init = tf.global_variables_initializer()
    
    # Start training
    with tf.Session() as sess:
        sess.run(init)
    
        for epoch in range(training_epochs):
            avg_cost = 0.0
            total_batch = int(mnist.train.num_examples/batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_xs, batch_ys = mnist.train.next_batch(batch_size)
                # Run one optimization step on this batch
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
    
                # Accumulate the average loss across batches
                avg_cost += c / total_batch
            # Print a log every display_step epochs
            if (epoch+1) % display_step == 0:
                print("Epoch:", (epoch+1), "cost=", "{:.9f}".format(avg_cost))
    
        print("Optimization Finished!")
    
        # Test the model
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        # Accuracy over the first 3000 test samples
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        print("Accuracy:", accuracy.eval({x:mnist.test.images[:3000], y: mnist.test.labels[:3000]}))

    Running this produces the following output:

    Epoch: 5 cost= 0.465533400
    Epoch: 10 cost= 0.392410529
    Epoch: 15 cost= 0.362706895
    Epoch: 20 cost= 0.345443823
    Epoch: 25 cost= 0.333700618
    Epoch: 30 cost= 0.325048538
    Epoch: 35 cost= 0.318335179
    Epoch: 40 cost= 0.312850520
    Epoch: 45 cost= 0.308320769
    Epoch: 50 cost= 0.304484692
    Optimization Finished!
    Accuracy: 0.896
    

    Summary:

    I. Gradient descent
    There are several ways to perform the gradient computation:
    1. Compute the gradients directly from the analytic formula.
    2. Use automatic differentiation.
    Deriving gradients by hand has two drawbacks:
    (1) in a deep neural network the formulas become very long;
    (2) it is computationally inefficient.
    Instead, the gradients can be computed automatically with:

    gradient = tf.gradients(mse, [theta])[0]

    The arguments are:
    (1) an op, the loss function (here mse, the mean squared error);
    (2) a list of variables, i.e. the weights (here, theta).
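
    As a sketch of how this gradient plugs into a training step (assuming theta, mse, and learning_rate are already defined in a linear-regression graph like the one above):

    # Hypothetical graph: theta is the weight Variable, mse the loss tensor
    gradients = tf.gradients(mse, [theta])[0]
    # One gradient-descent step applied by hand: theta <- theta - learning_rate * gradient
    train_op = tf.assign(theta, theta - learning_rate * gradients)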
    3. Simpler still: use an Optimizer

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    train_op = optimizer.minimize(mse)

    There are many other optimizers as well, for example MomentumOptimizer, which typically converges faster; see the sketch below.
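
    A minimal sketch of swapping in MomentumOptimizer (the momentum=0.9 value is a common choice, not something specified in this post):

    optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9)
    train_op = optimizer.minimize(mse)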

    Feeding data during gradient descent:
    (1) Feed the data in mini-batches, as in the handwritten-digit example above
           (split the training set by batch_size; a batch can be fetched with a function like the one below).
    (2) Use placeholders as the entry points through which each mini-batch is fed.

    def fetch_batch(epoch, batch_index, batch_size):
        # m (total sample count), n_batches (batches per epoch), data (the inputs) and
        # housing (a dataset whose .target holds the labels) are defined elsewhere
        np.random.seed(epoch * n_batches + batch_index)
        indices = np.random.randint(m, size=batch_size)
        X_batch = data[indices]
        y_batch = housing.target.reshape(-1, 1)[indices]
        return X_batch, y_batch
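
    A sketch of how this function feeds the placeholders inside the training loop (X, y, train_op, init, n_epochs, n_batches and batch_size are assumed to have been defined when the graph was built):

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(n_epochs):
            for batch_index in range(n_batches):
                # Fetch one mini-batch and feed it through the placeholders
                X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
                sess.run(train_op, feed_dict={X: X_batch, y: y_batch})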

    II. Saving and restoring models
    After training we usually want to save the model so that predictions can be made at any time.
    Sometimes we also want to save intermediate results during training.
    Both are simple and convenient in TensorFlow:
    (1) Saving a model:
    Create a Saver node during the graph-construction phase, then call save() wherever the model needs to be saved during the execution phase.

    saver = tf.train.Saver()

    Then start the Session; at whatever step we need a checkpoint, we call saver.save() with the corresponding path:

    save_path = saver.save(sess, "/my_model.ckpt")
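
    For example, a sketch of periodic checkpointing inside a training loop (reusing the epoch/display_step pattern from the examples above; the path and feed values are illustrative):

    for epoch in range(training_epochs):
        sess.run(train_op, feed_dict={X: X_batch, y: y_batch})  # one training step
        if (epoch + 1) % display_step == 0:
            # Write a checkpoint every display_step epochs
            save_path = saver.save(sess, "/tmp/my_model.ckpt")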

    (2) Restoring a model:
    Create a Saver node at the end of the graph-construction phase, and call restore() at the start of the execution phase:

    saver.restore(sess, "/tmp/my_model_final.ckpt")
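
    In context, restoring takes the place of variable initialization (a sketch; theta stands in for whatever variables the graph defines):

    with tf.Session() as sess:
        saver.restore(sess, "/tmp/my_model_final.ckpt")  # no init needed: restore sets the variable values
        best_theta = theta.eval()  # restored variables are immediately usable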

    For more situations and techniques around saving and restoring models, see this post:

    http://blog.csdn.net/liangyihuai/article/details/78515913
