  • CS20 Chapter 3

    TODO (P54): shuffle the data; a short sketch follows below.
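
    A minimal sketch of what the shuffling might look like with tf.data, assuming the same dataset built later in this note (buffer_size is a guessed value, not taken from the lecture):

    dataset = tf.data.Dataset.from_tensor_slices((data[:,0], data[:,1]))
    dataset = dataset.shuffle(buffer_size=1000)  # reshuffles on each re-initialization by default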

    03_Lecture note_Linear and Logistic Regression

    Learning point 1:

    In Python, file paths should not use backslashes (\) as separators; use forward slashes ( / ) to separate path components instead. For example:

    # Open a file
    f = open("/tmp/foo.txt", "w")
    
    f.write("Python is a very good language.\n"
            "Yes, it really is great!!\n")
    
    # Close the opened file
    f.close()
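
    The same applies on Windows: forward slashes are accepted, and pathlib normalizes separators for you. A minimal sketch (the path below is hypothetical):

    from pathlib import Path
    
    # Forward slashes work on every platform; pathlib normalizes them.
    p = Path("C:/tmp") / "foo.txt"   # hypothetical Windows path
    f = open(p, "w")
    f.close()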

     Birth rate - life expectancy code:

    """ Solution for simple linear regression example using tf.data
    Created by Chip Huyen (chiphuyen@cs.stanford.edu)
    CS20: "TensorFlow for Deep Learning Research"
    cs20.stanford.edu
    Lecture 03
    """
    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
    import time
    
    import numpy as np
    import matplotlib.pyplot as plt
    import tensorflow as tf
    
    import utils
    
    DATA_FILE = 'data/birth_life_2010.txt'
    
    # Step 1: read in the data
    data, n_samples = utils.read_birth_life_data(DATA_FILE)
    
    # Step 2: create Dataset and iterator
    dataset = tf.data.Dataset.from_tensor_slices((data[:,0], data[:,1]))
    
    iterator = dataset.make_initializable_iterator()
    X, Y = iterator.get_next()
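    # X and Y are symbolic tensors; each sess.run() that uses them pulls
    # the next (birth_rate, life_expectancy) pair from the dataset.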
    
    # Step 3: create weight and bias, initialized to 0
    w = tf.get_variable('weights', initializer=tf.constant(0.0))
    b = tf.get_variable('bias', initializer=tf.constant(0.0))
    
    # Step 4: build model to predict Y
    Y_predicted = X * w + b
    
    # Step 5: use the square error as the loss function
    loss = tf.square(Y - Y_predicted, name='loss')
    # loss = utils.huber_loss(Y, Y_predicted)
    
    # Step 6: using gradient descent with learning rate of 0.001 to minimize loss
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
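    # minimize() both computes the gradients of loss w.r.t. the trainable
    # variables (w, b) and applies one gradient-descent update per run call.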
    
    start = time.time()  # record the start time once, before training
    with tf.Session() as sess:
        # Step 7: initialize the necessary variables, in this case, w and b
        sess.run(tf.global_variables_initializer()) 
        writer = tf.summary.FileWriter('./graphs/linear_reg', sess.graph)
        
        # Step 8: train the model for 100 epochs
        for i in range(100):
            sess.run(iterator.initializer) # initialize the iterator
            total_loss = 0
            try:
                while True:
                    _, l = sess.run([optimizer, loss]) 
                    total_loss += l
            except tf.errors.OutOfRangeError:
                pass
                
            print('Epoch {0}: {1}'.format(i, total_loss/n_samples))
    
        # close the writer when you're done using it
        writer.close() 
        
        # Step 9: output the values of w and b
        w_out, b_out = sess.run([w, b]) 
        print('w: %f, b: %f' %(w_out, b_out))
    print('Took: %f seconds' %(time.time() - start))  
    
    # plot the results
    plt.plot(data[:,0], data[:,1], 'bo', label='Real data')
    plt.plot(data[:,0], data[:,0] * w_out + b_out, 'r', label='Predicted data with squared error')
    # plt.plot(data[:,0], data[:,0] * (-5.883589) + 85.124306, 'g', label='Predicted data with Huber loss')
    plt.legend()
    plt.show()
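
    The commented-out lines above refer to utils.huber_loss, which is not shown here. A minimal sketch of a Huber loss in the same TF 1.x style, assuming a per-sample scalar residual and a hand-picked delta (both assumptions, not taken from the course's utils module):

    def huber_loss(labels, predictions, delta=14.0):
        # Quadratic near zero, linear in the tails, so it is less sensitive
        # to outliers than the squared error used above. delta=14.0 is an
        # assumed threshold, not a value from the lecture.
        residual = tf.abs(labels - predictions)
        small = 0.5 * tf.square(residual)
        large = delta * residual - 0.5 * tf.square(delta)
        return tf.cond(residual < delta, lambda: small, lambda: large)

    Swapping it in is just a matter of replacing the loss in Step 5 with loss = huber_loss(Y, Y_predicted).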
  • Original post: https://www.cnblogs.com/captain-dl/p/9294335.html