  • Deep Learning with TensorFlow (Part 1)

    I. Introduction to TensorFlow

    1. Definition of TensorFlow

       Tensor: an N-dimensional array.

       Flow: computation based on a dataflow graph.

       TensorFlow: the process in which tensors flow from one end of a computation graph to the other; complex data structures are passed into an artificial neural network for analysis and processing.


    2. How it works:

        A graph represents a computation task. The nodes of the graph are called ops (operations). An op takes zero or more tensors and produces zero or more tensors; to execute the computation, you create a session object and run the graph in it.

       The workflow has two steps (see the minimal sketch below):

        (1) define the computation graph

        (2) run the graph (with data) in a session
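
    To make the two steps concrete, here is a minimal sketch using the TensorFlow 1.x API (the same API as the code below); the constants are made up for illustration:

     import tensorflow as tf

     # Step 1: define the computation graph. Nothing is computed here;
     # c is a symbolic tensor describing an add op, not the value 7.0.
     a = tf.constant(3.0)
     b = tf.constant(4.0)
     c = a + b

     # Step 2: run the graph (with data) in a session.
     with tf.Session() as sess:
         print(sess.run(c))  # 7.0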


    3. Characteristics:

        (1) Asynchronous: writing can happen in one place, reading in another, and training in yet another.

        (2) Global: ops are added to the global graph, monitoring is added to the global summary, and parameters/losses are added to global collections.

        (3) Symbolic: nothing has a concrete value when the graph is created; actual data is passed in only at run time (see the placeholder sketch below).
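
    As a minimal illustration of the symbolic style (again TF 1.x; the placeholder name and the fed values are made up):

     import tensorflow as tf

     # At graph-construction time, p is only a symbolic handle; it holds no data.
     p = tf.placeholder(tf.float32, shape=(None,), name="p")
     doubled = p * 2.0

     # Concrete values are supplied only at run time, via feed_dict.
     with tf.Session() as sess:
         print(sess.run(doubled, feed_dict={p: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]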


    II. Code

    1. Define the network's parameters and variables

        

     # -*- coding: utf-8 -*-
     # version: python 3.5
     import tensorflow as tf
     from numpy.random import RandomState

     batch_size = 8
     # Placeholders for the input features (2 per sample) and the true values.
     x = tf.placeholder(tf.float32, shape=(None, 2), name="x-input")
     y_ = tf.placeholder(tf.float32, shape=(None, 1), name='y-input')
     # A single linear layer: two inputs, one output, no bias.
     w1 = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
     y = tf.matmul(x, w1)

    2. Define a custom loss function

         

    # Define the loss so that under-prediction costs more than over-prediction,
    # pushing the model to err on the high side.
    loss_less = 10
    loss_more = 1
    loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_) * loss_more, (y_ - y) * loss_less))
    train_step = tf.train.AdamOptimizer(0.001).minimize(loss)
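
    To see the asymmetry concretely, here is a small stand-alone check with made-up predictions (one 0.2 too high, one 0.2 too low):

     import tensorflow as tf

     loss_less, loss_more = 10, 1
     y = tf.constant([[1.2], [0.8]])   # hypothetical predictions
     y_ = tf.constant([[1.0], [1.0]])  # true values
     # Element-wise: over-predictions cost loss_more, under-predictions cost loss_less.
     per_example = tf.where(tf.greater(y, y_), (y - y_) * loss_more, (y_ - y) * loss_less)

     with tf.Session() as sess:
         print(sess.run(per_example))  # ~[[0.2] [2.0]]: the same 0.2 error costs 10x more on the low side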

    3. Generate a simulated dataset

     

    rdm = RandomState(1)
    X = rdm.rand(128, 2)
    # Labels: y = x1 + x2 plus uniform noise on [-0.05, 0.05].
    Y = [[x1 + x2 + rdm.rand() / 10.0 - 0.05] for (x1, x2) in X]
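
    Since rdm.rand() / 10.0 - 0.05 is zero-mean noise, the underlying relationship is y ≈ x1 + x2, so an unbiased fit would drive both components of w1 toward 1.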

    4. Train the model

     with tf.Session() as sess:
         init_op = tf.global_variables_initializer()
         sess.run(init_op)
         STEPS = 5000
         for i in range(STEPS):
             # Cycle through the 128 samples in batches of batch_size.
             start = (i * batch_size) % 128
             end = start + batch_size
             sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
             if i % 1000 == 0:
                 print("After %d training step(s), w1 is: " % (i))
                 print(sess.run(w1), "\n")
         print("Final w1 is: \n", sess.run(w1))

    Result:

    After 0 training step(s), w1 is: 
    [[-0.81031823]
     [ 1.4855988 ]] 
    
    After 1000 training step(s), w1 is: 
    [[ 0.01247112]
     [ 2.1385448 ]] 
    
    After 2000 training step(s), w1 is: 
    [[ 0.45567414]
     [ 2.17060661]] 
    
    After 3000 training step(s), w1 is: 
    [[ 0.69968724]
     [ 1.8465308 ]] 
    
    After 4000 training step(s), w1 is: 
    [[ 0.89886665]
     [ 1.29736018]] 
    
    Final w1 is: 
    [[ 1.01934695]
     [ 1.04280889]]
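
    With under-prediction penalized 10x, both learned weights settle slightly above 1, so the model systematically predicts a bit more than x1 + x2, which is exactly the bias this loss was designed to induce.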

    5. Redefine the loss so that over-prediction costs more, pushing the model to err on the low side

     # Penalize over-prediction 10x more than under-prediction.
     loss_less = 1
     loss_more = 10
     loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_) * loss_more, (y_ - y) * loss_less))
     train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

     with tf.Session() as sess:
         init_op = tf.global_variables_initializer()
         sess.run(init_op)
         STEPS = 5000
         for i in range(STEPS):
             start = (i * batch_size) % 128
             end = start + batch_size
             sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
             if i % 1000 == 0:
                 print("After %d training step(s), w1 is: " % (i))
                 print(sess.run(w1), "\n")
         print("Final w1 is: \n", sess.run(w1))

    Result:

    After 0 training step(s), w1 is: 
    [[-0.81231821]
     [ 1.48359871]] 
    
    After 1000 training step(s), w1 is: 
    [[ 0.18643527]
     [ 1.07393336]] 
    
    After 2000 training step(s), w1 is: 
    [[ 0.95444274]
     [ 0.98088616]] 
    
    After 3000 training step(s), w1 is: 
    [[ 0.95574027]
     [ 0.9806633 ]] 
    
    After 4000 training step(s), w1 is: 
    [[ 0.95466018]
     [ 0.98135227]] 
    
    Final w1 is: 
    [[ 0.95525807]
     [ 0.9813394 ]]
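
    Here both weights settle slightly below 1: with over-prediction now penalized 10x, the model learns to predict a bit less than x1 + x2.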
    
    
  • Original post: https://www.cnblogs.com/BigStupid/p/7821327.html