TensorFlow Introductory Example Tutorial

The code in this section is based on a very detailed TensorFlow tutorial by a well-known GitHub author; the link comes first:

    https://github.com/aymericdamien/TensorFlow-Examples/

I have added comments and Chinese translations to some of the code, and will keep updating this post. It currently covers:

1. A traditional multilayer neural network for classifying the MNIST dataset (code walkthrough, translation)

1. A traditional multilayer neural network for classifying the MNIST dataset (code walkthrough, translation)

     

      1 """ Neural Network.
      2 
      3 A 2-Hidden Layers Fully Connected Neural Network (a.k.a Multilayer Perceptron)
      4 implementation with TensorFlow. This example is using the MNIST database
      5 of handwritten digits (http://yann.lecun.com/exdb/mnist/).
      6 
      7 Links:
      8     [MNIST Dataset](http://yann.lecun.com/exdb/mnist/).
      9 
     10 Author: Aymeric Damien
     11 Project: https://github.com/aymericdamien/TensorFlow-Examples/
     12 """
     13 
     14 from __future__ import print_function
     15 
     16 # Import MNIST data
     17 # 导入mnist数据集
     18 from tensorflow.examples.tutorials.mnist import input_data
     19 mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
     20 
     21 # 导入tf
     22 import tensorflow as tf
     23 
     24 # Parameters
     25 # 设定各种超参数
     26 learning_rate = 0.1 # 学习率
     27 num_steps = 500   # 训练500次
     28 batch_size = 128  # 每批次取128个样本训练
     29 display_step = 100  # 每训练100步显示一次
     30 
     31 # Network Parameters
     32 # 设定网络的超参数
     33 n_hidden_1 = 256 # 1st layer number of neurons
     34 n_hidden_2 = 256 # 2nd layer number of neurons
     35 num_input = 784 # MNIST data input (img shape: 28*28)
     36 num_classes = 10 # MNIST total classes (0-9 digits)
     37 
     38 # tf Graph input
     39 # tf图的输入,因为不知道到底输入大小是多少,因此设定占位符
     40 X = tf.placeholder("float", [None, num_input])
     41 Y = tf.placeholder("float", [None, num_classes])
     42 
     43 # Store layers weight & bias
     44 # 初始化w和b
     45 weights = {
     46     'h1': tf.Variable(tf.random_normal([num_input, n_hidden_1])),
     47     'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
     48     'out': tf.Variable(tf.random_normal([n_hidden_2, num_classes]))
     49 }
     50 biases = {
     51     'b1': tf.Variable(tf.random_normal([n_hidden_1])),
     52     'b2': tf.Variable(tf.random_normal([n_hidden_2])),
     53     'out': tf.Variable(tf.random_normal([num_classes]))
     54 }
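# Note: tf.random_normal draws with stddev=1.0 by default, which is a large
# initial scale for 784-dimensional inputs; smaller scales (e.g. stddev=0.1
# or Xavier initialization) usually make training more stable.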


# Create model
def neural_net(x):
    # Hidden fully connected layer 1 with 256 neurons
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    # Hidden fully connected layer 2 with 256 neurons
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    # Output fully connected layer with one neuron per class
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Construct model
# Feed the input X through the network to get the score vector (logits).
logits = neural_net(X)
# Apply the softmax classifier to turn the score vector into a probability vector.
prediction = tf.nn.softmax(logits)
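# softmax(z)_i = exp(z_i) / sum_j exp(z_j): each row of `prediction` is a
# probability distribution over the 10 digit classes, summing to 1.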

# Define loss and optimizer
# Cross-entropy loss, averaged over the batch ----> loss_op
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
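# Note: tf.nn.softmax_cross_entropy_with_logits applies softmax internally,
# so the raw logits (not `prediction`) are passed here; `prediction` is only
# used for the accuracy evaluation below.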
# The optimizer used here is the Adam algorithm.
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# Minimizing the loss yields ----> the runnable training op train_op
train_op = optimizer.minimize(loss_op)
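# Note: minimize() is shorthand for compute_gradients() followed by
# apply_gradients(), so one run of train_op performs a full backprop update.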

# Evaluate model
# tf.equal() compares element-wise: True where entries match, False otherwise.
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
# tf.cast() converts the booleans to floats ----> tf.reduce_mean() averages them
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
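# Worked example for a batch of two samples:
#   tf.argmax(prediction, 1) -> [3, 7]  (predicted digit per sample)
#   tf.argmax(Y, 1)          -> [3, 1]  (true digit per sample)
#   tf.equal(...)            -> [True, False]
#   tf.cast(..., tf.float32) -> [1.0, 0.0]
#   tf.reduce_mean(...)      -> 0.5  (50% accuracy on this batch)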

# Initialize the variables (i.e. assign their default values)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    for step in range(1, num_steps+1):
        # Fetch a batch of 128 training samples: data batch_x and labels batch_y.
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop)
        # train_op is the training op built by the optimizer; running it
        # updates the model via backpropagation.
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        # Print progress every 100 steps (and at step 1).
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))

    print("Optimization Finished!")

    # Calculate accuracy for MNIST test images
    # See how the model performs on the held-out test set.
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={X: mnist.test.images,
                                        Y: mnist.test.labels}))
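
A note on versions: the code above targets TensorFlow 1.x, and the tensorflow.examples.tutorials.mnist module it loads data from was removed in TensorFlow 2.x. For reference, here is a minimal sketch of the same two-hidden-layer network written with tf.keras, assuming TensorFlow 2 is installed. The layer sizes mirror the ones above; like the original, the hidden layers have no activation function (adding activation='relu' would make it a conventional MLP), and Adam's default learning rate of 0.001 is used instead of 0.1, which is unusually high for Adam.

import tensorflow as tf

# Load MNIST via the built-in Keras loader and flatten each 28*28 image
# into a 784-dimensional float vector scaled to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Same architecture: two fully connected hidden layers of 256 neurons
# and a 10-way output layer producing logits (no activations, as above).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(256),
    tf.keras.layers.Dense(256),
    tf.keras.layers.Dense(10),
])

# Adam optimizer with cross-entropy computed from the logits, as in the
# graph version; from_logits=True replaces the explicit softmax + loss.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# One epoch at batch_size=128 is ~469 steps, close to the 500 steps above.
model.fit(x_train, y_train, batch_size=128, epochs=1)
print("Testing Accuracy:", model.evaluate(x_test, y_test)[1])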
Original post: https://www.cnblogs.com/kongweisi/p/10996383.html