  • Recognizing MNIST Images with a Convolutional Neural Network

    This post uses TensorFlow 1.0 to build a convolutional neural network for the MNIST dataset, which is more or less the "hello world" of deep learning. Although the network has only two convolutional layers, its accuracy on the training set already reaches essentially 100%.

    The code is as follows:

    # Author: Chaz
    from tensorflow.examples.tutorials.mnist import input_data
    import tensorflow as tf
    
    mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)
    sess = tf.InteractiveSession()
    
    def weight_variable(shape):
        # Weights drawn from a truncated normal to break symmetry
        initial = tf.truncated_normal(shape,stddev=0.1)
        return tf.Variable(initial)
    def bias_variable(shape):
        # Small positive bias so ReLU units start active
        initial = tf.constant(0.1,shape=shape)
        return tf.Variable(initial)
    def conv2d(x,W):
        # 2-D convolution, stride 1, SAME padding (spatial size preserved)
        return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME')
    def max_pool_2x2(x):
        # 2x2 max pooling, stride 2 (spatial size halved)
        return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME')
    
    x = tf.placeholder(tf.float32,[None,784])   # flattened 28x28 input images
    y_ = tf.placeholder(tf.float32,[None,10])   # one-hot labels
    x_image = tf.reshape(x,[-1,28,28,1])        # NHWC tensor for the conv layers
    
    # First convolutional layer: 5x5 kernels, 1 input channel, 32 feature maps
    W_conv1 = weight_variable([5,5,1,32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1)+b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)  # 28x28 -> 14x14
    
    # Second convolutional layer: 5x5 kernels, 32 input channels, 64 feature maps
    W_conv2 = weight_variable([5,5,32,64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)  # 14x14 -> 7x7
    
    # Fully connected layer: flatten the 7x7x64 feature maps into 1024 units
    W_fc1 = weight_variable([7*7*64,1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2,[-1,7*7*64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1)
    
    # Dropout followed by the 10-way softmax output layer
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)
    W_fc2 = weight_variable([1024,10])
    b_fc2 = bias_variable([10])
    y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop,W_fc2)+b_fc2)
    
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_*tf.log(y_conv),reduction_indices=[1]))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    
    correct_prediction = tf.equal(tf.argmax(y_conv,1),tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    
    tf.global_variables_initializer().run()  # TF 1.0 API; initialize_all_variables() is deprecated
    for i in range(20000):
        batch = mnist.train.next_batch(50)
        if i%100 ==0:
            train_accuracy = accuracy.eval(feed_dict = {x:batch[0],y_:batch[1],keep_prob:1.0})
            print("step %d, train accuracy %g"%(i,train_accuracy))
        train_step.run(feed_dict={x:batch[0],y_:batch[1],keep_prob:0.5})  # apply dropout only while training
    
    print("TEST ACCURACY %g" % accuracy.eval(feed_dict={x:mnist.test.images,y_:mnist.test.labels,keep_prob:1.0}))

    Training took more than three hours in total, and judging by the training accuracy the results should be quite good.

    On the test set, however, feeding all 10,000 images in at once exhausted memory, so I never obtained the final number. Instead, the test set can be evaluated in several smaller batches with a for loop and the per-batch accuracies averaged, as sketched below.
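
    A minimal sketch of such batched evaluation, reusing the accuracy, x, y_ and keep_prob tensors defined above (the batch size of 500 is an arbitrary choice):

    # Evaluate the test set in chunks so the whole set never has to fit
    # in memory at once; average the per-batch accuracies.
    test_batch_size = 500
    num_batches = mnist.test.num_examples // test_batch_size
    total_acc = 0.0
    for _ in range(num_batches):
        test_batch = mnist.test.next_batch(test_batch_size)
        total_acc += accuracy.eval(feed_dict={x:test_batch[0],
                                              y_:test_batch[1],
                                              keep_prob:1.0})
    print("TEST ACCURACY %g" % (total_acc/num_batches))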

    Reportedly, the test-set accuracy reaches about 98%.

    It is also worth saving the trained model; otherwise, spending several hours per training run is simply too costly. A sketch using tf.train.Saver follows.
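
    A minimal checkpointing sketch, assuming the session and graph built above (the checkpoint path ./mnist_cnn.ckpt is an arbitrary choice):

    # Save all variables so the model can be reused without retraining.
    saver = tf.train.Saver()
    save_path = saver.save(sess,"./mnist_cnn.ckpt")
    print("Model saved to %s" % save_path)

    # In a later run, rebuild the same graph and restore instead of training:
    # saver.restore(sess,"./mnist_cnn.ckpt")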
