  • 46. Getting started with TensorFlow: recognizing handwritten digits

    1. Use TensorFlow's softmax function to recognize handwritten digits.
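
    Before the session below, a quick picture of what the softmax layer does: for each image the model computes 10 class scores via xW + b and turns them into probabilities. A minimal sketch, assuming only NumPy is available (the scores here are made up for illustration):

    import numpy as np

    def softmax(z):
        # Shift by the max for numerical stability, then normalize the exponentials.
        e = np.exp(z - np.max(z))
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1, 0.0, -1.0, 0.5, 0.0, 0.3, -0.2, 0.1])  # one row of xW + b
    probs = softmax(scores)
    print(probs)           # sums to 1: a probability distribution over the 10 digits
    print(probs.argmax())  # 0 -- the predicted digit, same idea as tf.argmax(y, 1)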

    Administrator@SuperComputer MINGW64 ~
    $ docker run -it -p 8888:8888 registry.cn-hangzhou.aliyuncs.com/denverdino/tensorflow bash
    root@b3e200093da9:/notebooks# python
    Python 2.7.6 (default, Oct 26 2016, 20:30:19)
    [GCC 4.8.4] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from tensorflow.examples.tutorials.mnist import input_data
    ---- A note on the data loaded below: it is downloaded from the internet, but as for which directory the files actually end up in, I still haven't found it (see the note right after this session).
    >>> mnist = input_data.read_data_sets("/MNIST_data/", one_hot=True)
    Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
    Extracting /MNIST_data/train-images-idx3-ubyte.gz
    Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
    Extracting /MNIST_data/train-labels-idx1-ubyte.gz
    Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
    Extracting /MNIST_data/t10k-images-idx3-ubyte.gz
    Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
    Extracting /MNIST_data/t10k-labels-idx1-ubyte.gz
    >>> import tensorflow as tf
    >>> x = tf.placeholder(tf.float32, [None, 784])
    >>> W = tf.Variable(tf.zeros([784, 10]))
    >>> b = tf.Variable(tf.zeros([10]))
    >>> y = tf.nn.softmax(tf.matmul(x, W) + b)
    >>> y_ = tf.placeholder(tf.float32, [None, 10])
    >>> cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    >>> train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
    >>> init = tf.initialize_all_variables()  # this call is deprecated; use the one after the warning instead
    WARNING:tensorflow:From <stdin>:1 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
    Instructions for updating:
    Use `tf.global_variables_initializer` instead.
    >>> init = tf.global_variables_initializer()
    >>> sess = tf.Session()
    >>> sess.run(init)
    >>> for i in range(1000):
    ...     batch_xs, batch_ys = mnist.train.next_batch(100)
    ...     sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    ...
    >>> correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    >>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    >>> print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
    0.9167
    >>>
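
    About the note above on where the data ends up: judging from the "Extracting /MNIST_data/..." lines, read_data_sets() saves the four .gz archives into the directory passed as its first argument, here /MNIST_data/ inside the container. A quick check, assuming the same container and path:

    import os
    # List the directory passed to read_data_sets() above; the file names should
    # match the "Successfully downloaded ..." lines in the session.
    print(os.listdir("/MNIST_data/"))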

    Finally, the trained model reaches an accuracy of 0.9167 on the test set.
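
    For convenience, here are the same steps collected into a single script. This is only a sketch of the session above; it assumes the same TensorFlow 1.x image (where tf.placeholder and tf.Session are still available) and the bundled MNIST helper:

    from tensorflow.examples.tutorials.mnist import input_data
    import tensorflow as tf

    # Download (or reuse) MNIST; labels are one-hot vectors of length 10.
    mnist = input_data.read_data_sets("/MNIST_data/", one_hot=True)

    # Softmax regression: 784 pixels in, 10 class probabilities out.
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    # Cross-entropy loss against the true one-hot labels y_.
    y_ = tf.placeholder(tf.float32, [None, 10])
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    # 1000 steps of mini-batch gradient descent, 100 images per batch.
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    # Accuracy: fraction of test images whose argmax prediction matches the label.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))  # ~0.92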

  • Original post: https://www.cnblogs.com/weizhen/p/6272222.html