  • 软培

    Concepts of machine learning

    Supervised Learning:

    \[ [x,\ y] \]

    A high-dimensional vector (Vector) is supplied together with its label (Label). Through modeling (Modeling), the machine automatically computes the most suitable parameter values (Parameter) of the model. In the end we obtain a model with fixed parameters, so that when a new high-dimensional vector is fed in later its label can be predicted, with the label error kept within an acceptable range.

    Unsupervised Learning:

    \[ [x] \]

    Only the high-dimensional vectors (Vector) are supplied, without labels; through modeling (Modeling), the machine computes the corresponding undetermined parameter values (Parameter) of the model.
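    For a concrete contrast, here is a minimal scikit-learn sketch (the tiny data set and the model choices are illustrative assumptions, not part of the original notes): supervised learning fits on \([x,\ y]\) pairs and then predicts labels, while unsupervised learning fits on \([x]\) alone, e.g. by clustering.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = np.array([[0.1], [0.2], [0.9], [1.0]])
    y = np.array([0, 0, 1, 1])                   # labels available: supervised

    clf = LogisticRegression().fit(X, y)         # learn parameters from [x, y]
    print(clf.predict([[0.85]]))                 # predict the label of a new vector

    km = KMeans(n_clusters=2, n_init=10).fit(X)  # only [x]: unsupervised
    print(km.labels_)                            # structure found without labels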

    A typical supervised learning method

    Bayesian probability

    Probability is a description of how likely an event is to occur.

    1. Cognition based on statistics

    1.1 Important principles:
    - Conclusions come from observations
    - Quantitative descriptions are preferable to qualitative ones
    - Conclusions should state their conditions clearly, be concise, and generalize well
    - They should be highly self-consistent and highly verifiable

    1.2 Classical probability model (static model)
    - The experiment has only finitely many elementary outcomes
    - Every elementary outcome of the experiment is equally likely

    1.3 Probability based on statistics (frequency)

    • Under fixed conditions, repeat an experiment \(n\) times (i.e. observe an event \(n\) times); the probability \(P(A)\) of an event \(A\) is

    \[ P(A) = \frac{\text{number of times } A \text{ occurs}}{\text{total number of observations}} \]

    • If \(A\) and \(B\) are positively correlated, then \(P(A) < P(A\mid B)\)
    • If \(A\) and \(B\) are negatively correlated, then \(P(A) > P(A\mid B)\). (A quick empirical check follows below.)
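    A minimal sketch of that empirical check (the boolean observation arrays below are made up for illustration): estimate \(P(A)\) and \(P(A\mid B)\) by counting.

    import numpy as np

    # ten made-up joint observations of events A and B
    A = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 0], dtype=bool)
    B = np.array([1, 1, 0, 1, 0, 0, 0, 1, 1, 0], dtype=bool)

    p_a = A.mean()             # P(A): times A occurred / total observations
    p_a_given_b = A[B].mean()  # P(A|B): frequency of A among observations where B occurred
    print(p_a, p_a_given_b)    # 0.6 vs 1.0 -> P(A|B) > P(A): A and B are positively correlated here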

    Naive Bayes classifier

    Wikipedia: all naive Bayes classifiers assume that every feature of a sample is independent of every other feature.

    A simple rule is to pick the most probable class: this is the well-known maximum a posteriori (MAP) decision rule.

    \[ \operatorname{classify}(f_1,\dots,f_n) = \underset{c}{\operatorname{argmax}}\ p(C=c)\, \prod_{i=1}^n p(F_i=f_i \mid C=c) \]
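    A hedged sketch of the MAP rule for discrete features (the classes, feature values and probabilities below are invented for illustration): for every class, multiply the prior by the per-feature conditional probabilities and take the argmax.

    # illustrative prior p(C=c) and per-feature conditionals p(F_i = f_i | C = c)
    priors = {'spam': 0.4, 'ham': 0.6}
    cond = {
        'spam': [{'free': 0.30, 'hello': 0.05}, {'offer': 0.20, 'meeting': 0.02}],
        'ham':  [{'free': 0.02, 'hello': 0.20}, {'offer': 0.03, 'meeting': 0.25}],
    }

    def classify(features):
        scores = {}
        for c, prior in priors.items():
            score = prior
            for i, f in enumerate(features):
                score *= cond[c][i].get(f, 1e-6)  # tiny floor for unseen values
            scores[c] = score
        return max(scores, key=scores.get)        # argmax over classes

    print(classify(['free', 'offer']))     # -> 'spam'
    print(classify(['hello', 'meeting']))  # -> 'ham'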

    Gaussian naive Bayes

    When the data to be handled is continuous, a common assumption is that these continuous values follow a Gaussian distribution.

    For example, suppose the training set contains a continuous attribute \(x\). We first split the data by class and then compute the mean and variance of \(x\) within each class. Let \(\mu_c\) denote the mean of \(x\) over class \(c\) and \(\sigma^2_c\) the variance of \(x\) over class \(c\). The probability of a value given a class, \(P(x=v\mid c)\), can then be computed by plugging \(v\) into the normal distribution with mean \(\mu_c\) and variance \(\sigma^2_c\), as follows:

    \[ P(x=v\mid c)= \frac{1}{\sqrt{2\pi\sigma^2_c}}\, e^{-\frac{(v-\mu_c)^2}{2\sigma^2_c}} \]

    Another common technique for handling continuous values is to discretize them (binning). In general, estimating a probability distribution is the better choice when the training sample is small or the precise distribution is known, while discretization tends to do better with a large number of samples, because a large sample lets the bins learn the actual distribution of the data. Since naive Bayes is typically used when plenty of samples are available (as noted, more computationally expensive models can achieve higher classification accuracy), the discretization method is generally preferred over distribution estimation.
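    Before moving on to the library call, here is a minimal NumPy sketch of the Gaussian likelihood above (the toy values are made up; sklearn's GaussianNB performs this fit per class internally): estimate each class's mean and variance, then evaluate the density at a new value \(v\).

    import numpy as np

    def gaussian_likelihood(v, class_values):
        """P(x = v | c) under a normal distribution fitted to the class-c values."""
        mu, var = class_values.mean(), class_values.var()
        return np.exp(-(v - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    # toy feature values grouped by class
    x_c0 = np.array([1.0, 1.2, 0.8, 1.1])
    x_c1 = np.array([3.0, 2.8, 3.3, 3.1])
    v = 1.05
    print(gaussian_likelihood(v, x_c0), gaussian_likelihood(v, x_c1))  # much larger for class 0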

    from sklearn.naive_bayes import GaussianNB
    # weather codes -- 0: sunny, 1: overcast, 2: precipitation, 3: cloudy
    data_table = [["date", "weather"], [1, 0], [2, 1], [3, 2],
                  [4, 1], [5, 2], [6, 0], [7, 0], [8, 3], [9, 1], [10, 1]]
    # features: the weather on each day (days 1-9)
    X = [[0], [1], [2], [1], [2], [0], [0], [3], [1]]
    # labels: the weather on the following day (days 2-10)
    y = [1, 2, 1, 2, 0, 0, 3, 1, 1]
    # fit the classifier on the training data and the corresponding labels
    clf = GaussianNB().fit(X, y)
    p = [[1]]
    print(clf.predict(p))
    
    [2]
    

    hello.py

    import tensorflow as tf
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))
    a = tf.constant(10)
    b = tf.constant(32)
    print(sess.run(a + b))
    
    b'Hello, TensorFlow!'
    42
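
    Note: all of the TensorFlow code in this section uses the 1.x graph-and-session API, which matches the output above. As a hedged aside (an assumption about your environment, not something run in this notebook), under TensorFlow 2 the same scripts can usually be kept running through the v1 compatibility layer:

    # assumption: TensorFlow 2.x installed; run the 1.x-style examples via the compat layer
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()   # restore graph/session execution semantics

    hello = tf.constant('Hello, TensorFlow!')
    with tf.Session() as sess:
        print(sess.run(hello))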
    

    Lab instructions:

    Implement a linear regression example

    # ex-01.py
    import tensorflow as tf
    import numpy as np
    
    #create some training data
    x_data = np.random.rand(100).astype(np.float32)
    y_data = x_data*1 + 3
    
    print ("x_data:")
    print (x_data)
    
    print ("y_data:")
    print( y_data)
    
    #create the weights and bias variables
    weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    print ("weights before initializing:")
    print (weights)
    
    biases = tf.Variable(tf.zeros([1]))
    print ("bias before initializing:")
    print (biases)
    
    #predict (fit) value
    y = weights*x_data + biases
    
    #loss function
    loss = tf.reduce_mean(tf.square(y - y_data))
    
    #optimizer definition
    optimizer = tf.train.GradientDescentOptimizer(0.1)
    
    #train definition
    train = optimizer.minimize(loss)
    
    #initializing the variables
    init = tf.initialize_all_variables()
    
    #session definition and active
    sess = tf.Session()
    sess.run(init)
    
    #train the model
    for step in range(501):
        sess.run(train)
        if step % 10 == 0:
            print (step,sess.run(weights),sess.run(biases))
    
    x_data:
    [ 0.89402199  0.10053575  0.55701882  0.53727293  0.36683112  0.44201213
      0.42864946  0.33498201  0.55696607  0.40338814  0.99002725  0.23393948
      0.40767717  0.20491761  0.10732751  0.64065552  0.95823038  0.36961049
      0.43446437  0.05964461  0.39571118  0.36884421  0.92716092  0.85938799
      0.61906868  0.28850925  0.33652243  0.34363723  0.19535853  0.12228432
      0.87395531  0.94348276  0.58827281  0.42699504  0.68471229  0.49138424
      0.60810924  0.59445798  0.48714778  0.84556079  0.71666372  0.73018974
      0.45137393  0.89610088  0.23163974  0.57911086  0.59919071  0.14642832
      0.16170129  0.93183321  0.03960106  0.02113333  0.26829427  0.05438332
      0.26300624  0.25510544  0.47461808  0.42821345  0.13820758  0.61761445
      0.95918888  0.57477629  0.80746269  0.9087519   0.29300612  0.2298919
      0.47188538  0.76722085  0.5297811   0.20937359  0.86967862  0.11050695
      0.41093281  0.43827012  0.63228905  0.29376149  0.35848793  0.44383112
      0.11717488  0.82729691  0.06545471  0.2652818   0.00641522  0.54356593
      0.69357717  0.9950943   0.72574335  0.77202976  0.6673649   0.33663997
      0.93204492  0.86043173  0.88384038  0.27974007  0.33629051  0.64907324
      0.77879322  0.63549143  0.0160666   0.56776083]
    y_data:
    [ 3.89402199  3.10053587  3.55701876  3.53727293  3.36683106  3.44201207
      3.42864943  3.33498192  3.55696607  3.40338802  3.99002719  3.23393941
      3.40767717  3.20491767  3.10732746  3.64065552  3.9582305   3.36961055
      3.43446445  3.0596447   3.39571118  3.36884427  3.92716098  3.85938787
      3.61906862  3.28850937  3.33652234  3.34363723  3.19535851  3.12228441
      3.87395525  3.94348288  3.58827281  3.42699504  3.68471241  3.49138427
      3.60810924  3.5944581   3.48714781  3.84556079  3.71666384  3.7301898
      3.45137405  3.896101    3.23163986  3.57911086  3.59919071  3.14642835
      3.1617012   3.93183327  3.03960109  3.02113342  3.26829433  3.05438328
      3.26300621  3.2551055   3.47461796  3.42821336  3.13820767  3.61761451
      3.95918894  3.57477617  3.80746269  3.90875196  3.29300618  3.22989178
      3.47188544  3.76722097  3.5297811   3.20937347  3.8696785   3.11050701
      3.41093278  3.43827009  3.63228893  3.29376149  3.35848784  3.44383121
      3.11717486  3.82729697  3.06545472  3.26528168  3.00641513  3.54356599
      3.69357729  3.9950943   3.72574329  3.77202988  3.66736484  3.33663988
      3.93204498  3.86043167  3.88384032  3.2797401   3.3362906   3.64907312
      3.77879333  3.63549137  3.01606655  3.56776094]
    weights before initializing:
    <tf.Variable 'Variable:0' shape=(1,) dtype=float32_ref>
    bias before initializing:
    <tf.Variable 'Variable_1:0' shape=(1,) dtype=float32_ref>
    WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\tf_should_use.py:170: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
    Instructions for updating:
    Use `tf.global_variables_initializer` instead.
    0 [ 0.90101707] [ 0.64157164]
    10 [ 1.73624611] [ 2.48117089]
    20 [ 1.6977222] [ 2.62674093]
    30 [ 1.62073231] [ 2.67402244]
    40 [ 1.55015469] [ 2.7114203]
    50 [ 1.48748767] [ 2.74431014]
    60 [ 1.43195248] [ 2.77343965]
    70 [ 1.3827436] [ 2.79925013]
    80 [ 1.33914053] [ 2.82212019]
    90 [ 1.30050492] [ 2.84238434]
    100 [ 1.266271] [ 2.86034036]
    110 [ 1.23593676] [ 2.87625074]
    120 [ 1.2090584] [ 2.8903482]
    130 [ 1.18524206] [ 2.9028399]
    140 [ 1.16413891] [ 2.91390872]
    150 [ 1.14543998] [ 2.92371631]
    160 [ 1.12887108] [ 2.93240666]
    170 [ 1.11418998] [ 2.94010711]
    180 [ 1.10118115] [ 2.94693041]
    190 [ 1.08965421] [ 2.95297623]
    200 [ 1.07944047] [ 2.95833325]
    210 [ 1.07039058] [ 2.96307993]
    220 [ 1.06237149] [ 2.96728611]
    230 [ 1.0552659] [ 2.97101283]
    240 [ 1.04896998] [ 2.97431517]
    250 [ 1.04339111] [ 2.97724128]
    260 [ 1.03844786] [ 2.97983384]
    270 [ 1.03406775] [ 2.98213148]
    280 [ 1.03018677] [ 2.9841671]
    290 [ 1.0267477] [ 2.98597097]
    300 [ 1.02370059] [ 2.98756886]
    310 [ 1.02100062] [ 2.98898506]
    320 [ 1.01860821] [ 2.99023986]
    330 [ 1.01648819] [ 2.99135184]
    340 [ 1.01460981] [ 2.99233723]
    350 [ 1.01294541] [ 2.99321008]
    360 [ 1.01147056] [ 2.99398375]
    370 [ 1.01016378] [ 2.99466896]
    380 [ 1.00900602] [ 2.99527621]
    390 [ 1.00797999] [ 2.99581456]
    400 [ 1.00707078] [ 2.9962914]
    410 [ 1.00626528] [ 2.99671388]
    420 [ 1.00555134] [ 2.99708843]
    430 [ 1.00491893] [ 2.99742007]
    440 [ 1.00435853] [ 2.99771404]
    450 [ 1.0038619] [ 2.9979744]
    460 [ 1.00342214] [ 2.99820518]
    470 [ 1.00303233] [ 2.99840927]
    480 [ 1.00268698] [ 2.99859095]
    490 [ 1.00238085] [ 2.9987514]
    500 [ 1.00210965] [ 2.9988935]
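
    As a quick sanity check on this result (a sketch that regenerates the same kind of data as ex-01; the exact random values will differ from the run above): ordinary least squares with NumPy should likewise recover a weight near 1 and a bias near 3, matching y = x*1 + 3.

    import numpy as np

    x_data = np.random.rand(100).astype(np.float32)
    y_data = x_data * 1 + 3

    A = np.stack([x_data, np.ones_like(x_data)], axis=1)  # design matrix [x, 1]
    (w, b), *_ = np.linalg.lstsq(A, y_data, rcond=None)   # closed-form least squares
    print(w, b)                                           # ~1.0 and ~3.0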
    

    Implement a matrix multiplication

    # ex-02.py
    import tensorflow as tf
    
    matrix1 = tf.constant([[3,3]])
    
    matrix2 = tf.constant([[1],
                            [2]])
    
    #matrix multiply
    product = tf.matmul(matrix1, matrix2)
    
    #method1
    #sess = tf.Session()
    #result = sess.run(product)
    #print "result:",result
    #sess.close()
    
    #method2
    with tf.Session() as sess:
        result2 = sess.run(product)
        print ("result2:",result2)
    
    
    result2: [[9]]
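
    The same product can be checked directly with NumPy (just a comparison, not part of the lab): a 1x2 matrix times a 2x1 matrix gives 3*1 + 3*2 = 9.

    import numpy as np
    print(np.matmul([[3, 3]], [[1], [2]]))  # [[9]]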
    

    Constants / variables / assignment

    # ex-03.py
    import tensorflow as tf
    
    state = tf.Variable(10, name='counter')
    
    print( state.name)
    
    one = tf.constant(1)
    
    new_value = tf.add(state, one)
    
    update = tf.assign(state, new_value)
    
    #this initializing is very important
    init = tf.global_variables_initializer()
    
    with tf.Session() as sess:
        sess.run(init)
        for x in range(10):
            sess.run(update)
            # sess.run is required to actually execute the update op and read the variable's value
            print(sess.run(state))
    
    
    counter_1:0
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20
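
    A small variation on ex-03 (assumed, not from the lab sheet): tf.assign_add folds the add and the assign into one op, and sess.run on that op returns the new value directly.

    import tensorflow as tf

    state = tf.Variable(10, name='counter2')
    update = tf.assign_add(state, 1)      # add and assign in one op

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(3):
            print(sess.run(update))       # 11, 12, 13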
    

    Placeholder usage

    # ex-04.py
    import tensorflow as tf
    
    input1 = tf.placeholder(tf.float32)
    input2 = tf.placeholder(tf.float32)
    
    output = input1 * input2
    
    with tf.Session() as sess:
        print(sess.run(output, feed_dict={input1: 4, input2: 2}))
        print(sess.run(output, feed_dict={input1: [4,2], input2: [2,7]}))
    
    8.0
    [  8.  14.]
    
    # ex-05.py
    import tensorflow as tf
    import numpy as np
    
    def add_layer(inputs, in_size, out_size, activation_function=None):
        weights = tf.Variable(tf.random_normal([in_size, out_size]))
        biases = tf.Variable(tf.zeros([1,out_size]))
        wx_b = tf.matmul(inputs, weights) + biases
        if activation_function is None:
            outputs = wx_b
        else:
            outputs = activation_function(wx_b)
        return outputs
    
    #make some input value
    x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
    noise = np.random.normal(0, 0.05, x_data.shape)
    y_data = np.square(x_data) + noise
    
    xs = tf.placeholder(tf.float32, [None,1])
    ys = tf.placeholder(tf.float32, [None,1])
    
    layer1 = add_layer(xs, 1, 10, activation_function=tf.nn.sigmoid)
    prediction = add_layer(layer1, 10, 1, activation_function=None)
    
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
    
    train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    
    init = tf.global_variables_initializer()
    
    sess = tf.Session()
    sess.run(init)
    
    for x in range(1000):
        sess.run(train, feed_dict = { xs: x_data, ys: y_data})
        if x % 10==0:
            print (sess.run(loss, feed_dict={xs:x_data, ys: y_data}))
    
    1.84078
    0.633829
    0.382142
    0.309708
    0.274133
    0.248449
    0.227182
    0.208942
    0.19316
    0.179466
    0.167569
    0.157221
    0.148213
    0.140364
    0.13352
    0.127546
    0.122329
    0.117768
    0.113778
    0.110283
    0.107221
    0.104534
    0.102175
    0.100101
    0.0982751
    0.0966661
    0.095246
    0.0939907
    0.0928792
    0.0918932
    0.0910167
    0.0902358
    0.0895383
    0.0889138
    0.0883529
    0.0878477
    0.087391
    0.0869768
    0.0865996
    0.0862548
    0.0859384
    0.0856467
    0.0853765
    0.0851253
    0.0848907
    0.0846704
    0.0844628
    0.0842661
    0.0840792
    0.0839006
    0.0837294
    0.0835647
    0.0834056
    0.0832514
    0.0831015
    0.0829553
    0.0828125
    0.0826725
    0.082535
    0.0823997
    0.0822664
    0.0821347
    0.0820044
    0.0818755
    0.0817476
    0.0816208
    0.0814948
    0.0813695
    0.0812449
    0.0811209
    0.0809973
    0.0808742
    0.0807514
    0.080629
    0.0805069
    0.080385
    0.0802633
    0.0801419
    0.0800206
    0.0798994
    0.0797783
    0.0796574
    0.0795365
    0.0794157
    0.079295
    0.0791743
    0.0790537
    0.0789332
    0.0788126
    0.0786921
    0.0785716
    0.0784511
    0.0783306
    0.0782102
    0.0780897
    0.0779693
    0.0778488
    0.0777284
    0.0776079
    0.0774875
    

    Visualization

    tensorboard --logdir='logs/'

    Note: this cannot be run directly inside the notebook.

    ex-06-vis.py

    import time
    import tensorflow as tf
    import numpy as np
    import matplotlib.pyplot as plt
    
    def add_layer(inputs, in_size, out_size, activation_function=None):
        weights = tf.Variable(tf.random_normal([in_size, out_size]))
        biases = tf.Variable(tf.zeros([1,out_size]))
        wx_b = tf.matmul(inputs, weights) + biases
        if activation_function is None:
            outputs = wx_b
        else:
            outputs = activation_function(wx_b)
        return outputs
    
    #make some input value
    x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
    noise = np.random.normal(0, 0.05, x_data.shape)
    y_data = np.square(x_data) + noise
    
    xs = tf.placeholder(tf.float32, [None,1])
    ys = tf.placeholder(tf.float32, [None,1])
    
    layer1 = add_layer(xs, 1, 10, activation_function=tf.nn.sigmoid)
    prediction = add_layer(layer1, 10, 1, activation_function=None)
    
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
    
    train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    
    init = tf.global_variables_initializer()
    
    sess = tf.Session()
    sess.run(init)
    
    # added: interactive plotting of the fitted line
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    ax.scatter(x_data, y_data)
    plt.ion()
    plt.show()
    
    for x in range(1000):
        sess.run(train, feed_dict = { xs: x_data, ys: y_data})
        if x % 10==0:
            # print (sess.run(loss, feed_dict={xs:x_data, ys: y_data}))
            
            try:
                ax.lines.remove(lines[0])
            except Exception:
                pass
            prediction_value = sess.run(prediction, feed_dict={xs:x_data})
            lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
            plt.pause(0.1)
    
    import time
    for i in [1,2,34,5]:
        print(i)
        time.sleep(0.1)
    
    1
    2
    34
    5
    

    ex-09-tb.py

    # from ex-05
    import tensorflow as tf
    import numpy as np
    
    def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
        layer_name = 'layer%s' % n_layer
        with tf.name_scope(layer_name):
            with tf.name_scope('weights'):
                weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
                tf.summary.histogram(layer_name + '/weights', weights)
            with tf.name_scope('bias'):
                biases = tf.Variable(tf.zeros([1,out_size]), name='b')
                tf.summary.histogram(layer_name + '/biases', biases)
            with tf.name_scope('wx_plus_b'):
                wx_b = tf.add(tf.matmul(inputs, weights), biases)
            if activation_function is None:
                outputs = wx_b
            else:
                outputs = activation_function(wx_b, name='output')
                tf.summary.histogram(layer_name + '/outputs', outputs)
        return outputs
    
    #make some input value
    x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
    noise = np.random.normal(0, 0.05, x_data.shape)
    y_data = np.square(x_data) + noise
    
    with tf.name_scope('inputs'):
        xs = tf.placeholder(tf.float32, [None,1], name='x_input')
        ys = tf.placeholder(tf.float32, [None,1], name='y_input')
    
    layer1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.sigmoid)
    prediction = add_layer(layer1, 10, 1, n_layer=2, activation_function=None)
    
    with tf.name_scope('loss'):
        loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
        tf.summary.scalar('loss', loss)
    
    with tf.name_scope('train'):
        train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    
    init = tf.global_variables_initializer()
    
    sess = tf.Session()
    sess.run(init)
    
    merged = tf.summary.merge_all()
    
    writer = tf.summary.FileWriter("logs/", sess.graph)
    
    for x in range(50000):
        sess.run(train, feed_dict = { xs: x_data, ys: y_data})
        if x % 50==0:
            sess.run(loss, feed_dict={xs:x_data, ys:y_data})
            result = sess.run(merged, feed_dict={xs:x_data, ys:y_data})
            writer.add_summary(result, x)
    
    import tensorflow as tf
    
    a = tf.constant(5, name="input_a")
    b = tf.constant(3, name="input_b")
    c = tf.multiply(a, b, name="mul_c")
    d = tf.add(a, b, name="add_d")
    e = tf.add(c, d, name="add_e")
    
    sess = tf.Session()
    sess.run(e)
    
    writer = tf.summary.FileWriter("E:/tensorflow/graph", tf.get_default_graph())
    writer.close()
    

    ex-10.py

    import tensorflow as tf
    import gzip
    import numpy
    import collections
    from six.moves import xrange  # pylint: disable=redefined-builtin
    from tensorflow.python.framework import random_seed
    from tensorflow.python.framework import dtypes
    #from tensorflow.examples.tutorials.mnist import input_data
    
    Datasets = collections.namedtuple('Datasets', ['train', 'validation', 'test'])
    
    def _read32(bytestream):
        dt = numpy.dtype(numpy.uint32).newbyteorder('>')
        return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]
    
    def dense_to_one_hot(labels_dense, num_classes):
        """Convert class labels from scalars to one-hot vectors."""
        num_labels = labels_dense.shape[0]
        index_offset = numpy.arange(num_labels) * num_classes
        labels_one_hot = numpy.zeros((num_labels, num_classes))
        labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
        return labels_one_hot
    
    
    def extract_images(f):
      """Extract the images into a 4D uint8 numpy array [index, y, x, depth].
      Args:
        f: A file object that can be passed into a gzip reader.
      Returns:
        data: A 4D uint8 numpy array [index, y, x, depth].
      Raises:
        ValueError: If the bytestream does not start with 2051.
      """
      print('Extracting', f.name)
      with gzip.GzipFile(fileobj=f) as bytestream:
        magic = _read32(bytestream)
        if magic != 2051:
          raise ValueError('Invalid magic number %d in MNIST image file: %s' %
                           (magic, f.name))
        num_images = _read32(bytestream)
        rows = _read32(bytestream)
        cols = _read32(bytestream)
        buf = bytestream.read(rows * cols * num_images)
        data = numpy.frombuffer(buf, dtype=numpy.uint8)
        data = data.reshape(num_images, rows, cols, 1)
        return data
    
    def extract_labels(f, one_hot=False, num_classes=10):
      """Extract the labels into a 1D uint8 numpy array [index].
      Args:
        f: A file object that can be passed into a gzip reader.
        one_hot: Does one hot encoding for the result.
        num_classes: Number of classes for the one hot encoding.
      Returns:
        labels: a 1D uint8 numpy array.
      Raises:
        ValueError: If the bystream doesn't start with 2049.
      """
      print('Extracting', f.name)
      with gzip.GzipFile(fileobj=f) as bytestream:
        magic = _read32(bytestream)
        if magic != 2049:
          raise ValueError('Invalid magic number %d in MNIST label file: %s' %
                           (magic, f.name))
        num_items = _read32(bytestream)
        buf = bytestream.read(num_items)
        labels = numpy.frombuffer(buf, dtype=numpy.uint8)
        if one_hot:
          return dense_to_one_hot(labels, num_classes)
        return labels
    
    def read_data_sets(train_dir,
                       fake_data=False,
                       one_hot=False,
                       dtype=tf.float32,
                       reshape=True,
                       validation_size=5000,
                       seed=None):
      '''
      if fake_data:
        def fake():
          return DataSet(
              [], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed)
      
        train = fake()
        validation = fake()
        test = fake()
        return base.Datasets(train=train, validation=validation, test=test)
      '''
    
      TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'
      TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'
      TEST_IMAGES = 't10k-images-idx3-ubyte.gz'
      TEST_LABELS = 't10k-labels-idx1-ubyte.gz'
    
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_IMAGES
      with open(local_file, 'rb') as f:
        train_images = extract_images(f)
    
                                       
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_LABELS
      with open(local_file, 'rb') as f:
        train_labels = extract_labels(f, one_hot=one_hot)
    
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_IMAGES
      with open(local_file, 'rb') as f:
        test_images = extract_images(f)
    
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_LABELS
         
      with open(local_file, 'rb') as f:
        test_labels = extract_labels(f, one_hot=one_hot)
    
      if not 0 <= validation_size <= len(train_images):
        raise ValueError(
            'Validation size should be between 0 and {}. Received: {}.'
            .format(len(train_images), validation_size))
    
      validation_images = train_images[:validation_size]
      validation_labels = train_labels[:validation_size]
      train_images = train_images[validation_size:]
      train_labels = train_labels[validation_size:]
    
      options = dict(dtype=dtype, reshape=reshape, seed=seed)
      
      train = DataSet(train_images, train_labels, **options)
      validation = DataSet(validation_images, validation_labels, **options)
      test = DataSet(test_images, test_labels, **options)
      
      return Datasets(train=train, validation=validation, test=test)
    
    class DataSet(object):
    
      def __init__(self,
                   images,
                   labels,
                   fake_data=False,
                   one_hot=False,
                   dtype=tf.float32,
                   reshape=True,
                   seed=None):
        """Construct a DataSet.
        one_hot arg is used only if fake_data is true.  `dtype` can be either
        `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
        `[0, 1]`.  Seed arg provides for convenient deterministic testing.
        """
        seed1, seed2 = random_seed.get_seed(seed)
        # If op level seed is not set, use whatever graph level seed is returned
        numpy.random.seed(seed1 if seed is None else seed2)
        dtype = dtypes.as_dtype(dtype).base_dtype
        if dtype not in (dtypes.uint8, dtypes.float32):
          raise TypeError('Invalid image dtype %r, expected uint8 or float32' %
                          dtype)
        if fake_data:
          self._num_examples = 10000
          self.one_hot = one_hot
        else:
          assert images.shape[0] == labels.shape[0], (
              'images.shape: %s labels.shape: %s' % (images.shape, labels.shape))
          self._num_examples = images.shape[0]
    
          # Convert shape from [num examples, rows, columns, depth]
          # to [num examples, rows*columns] (assuming depth == 1)
          if reshape:
            assert images.shape[3] == 1
            images = images.reshape(images.shape[0],
                                    images.shape[1] * images.shape[2])
          if dtype == dtypes.float32:
            # Convert from [0, 255] -> [0.0, 1.0].
            images = images.astype(numpy.float32)
            images = numpy.multiply(images, 1.0 / 255.0)
        self._images = images
        self._labels = labels
        self._epochs_completed = 0
        self._index_in_epoch = 0
    
      @property
      def images(self):
        return self._images
    
      @property
      def labels(self):
        return self._labels
    
      @property
      def num_examples(self):
        return self._num_examples
    
      @property
      def epochs_completed(self):
        return self._epochs_completed
    
      def next_batch(self, batch_size, fake_data=False, shuffle=True):
        """Return the next `batch_size` examples from this data set."""
        if fake_data:
          fake_image = [1] * 784
          if self.one_hot:
            fake_label = [1] + [0] * 9
          else:
            fake_label = 0
          return [fake_image for _ in xrange(batch_size)], [
              fake_label for _ in xrange(batch_size)
          ]
        start = self._index_in_epoch
        # Shuffle for the first epoch
        if self._epochs_completed == 0 and start == 0 and shuffle:
          perm0 = numpy.arange(self._num_examples)
          numpy.random.shuffle(perm0)
          self._images = self.images[perm0]
          self._labels = self.labels[perm0]
        # Go to the next epoch
        if start + batch_size > self._num_examples:
          # Finished epoch
          self._epochs_completed += 1
          # Get the rest examples in this epoch
          rest_num_examples = self._num_examples - start
          images_rest_part = self._images[start:self._num_examples]
          labels_rest_part = self._labels[start:self._num_examples]
          # Shuffle the data
          if shuffle:
            perm = numpy.arange(self._num_examples)
            numpy.random.shuffle(perm)
            self._images = self.images[perm]
            self._labels = self.labels[perm]
          # Start next epoch
          start = 0
          self._index_in_epoch = batch_size - rest_num_examples
          end = self._index_in_epoch
          images_new_part = self._images[start:end]
          labels_new_part = self._labels[start:end]
          return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0)
        else:
          self._index_in_epoch += batch_size
          end = self._index_in_epoch
          return self._images[start:end], self._labels[start:end]
    
    ################################################################
    #from here to operate the network
    
    mnist = read_data_sets('MNIST_data', one_hot=True)
    
    def add_layer(inputs, in_size, out_size, activation_function=None):
      weights = tf.Variable(tf.random_normal([in_size, out_size]))
      biases = tf.Variable(tf.zeros([1, out_size]) + 0.1,)
      wx_b = tf.matmul(inputs, weights) + biases
      if activation_function is None:
        outputs = wx_b
      else:
        outputs = activation_function(wx_b,)
      return outputs
    
    xs = tf.placeholder(tf.float32, [None, 28*28])
    ys = tf.placeholder(tf.float32, [None, 10])
    
    def compute_accuracy(v_xs, v_ys):
      global prediction
      y_pre = sess.run(prediction, feed_dict={xs: v_xs})
      correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
      result = sess.run(accuracy, feed_dict={xs:v_xs, ys:v_ys})
      return result
    
    prediction = add_layer(xs, 784, 10, activation_function = tf.nn.softmax)
    
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), 
                                  reduction_indices=[1]))
    
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
    
    sess = tf.Session()
    
    sess.run(tf.initialize_all_variables())
    
    for i in range(10000):
      batch_xs, batch_ys = mnist.train.next_batch(100)
      sess.run(train_step, feed_dict = {xs: batch_xs, ys: batch_ys})
      if i % 50 == 0:
        print("step:", i, ", ", compute_accuracy(mnist.test.images, mnist.test.labels))
    
    # ex-10-ml.py
    
    import tensorflow as tf
    import gzip
    import numpy
    import collections
    from six.moves import xrange  # pylint: disable=redefined-builtin
    from tensorflow.python.framework import random_seed
    from tensorflow.python.framework import dtypes
    #from tensorflow.examples.tutorials.mnist import input_data
    
    Datasets = collections.namedtuple('Datasets', ['train', 'validation', 'test'])
    
    def _read32(bytestream):
      dt = numpy.dtype(numpy.uint32).newbyteorder('>')
      return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]
    
    def dense_to_one_hot(labels_dense, num_classes):
      """Convert class labels from scalars to one-hot vectors."""
      num_labels = labels_dense.shape[0]
      index_offset = numpy.arange(num_labels) * num_classes
      labels_one_hot = numpy.zeros((num_labels, num_classes))
      labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
      return labels_one_hot
    
    
    def extract_images(f):
      """Extract the images into a 4D uint8 numpy array [index, y, x, depth].
      Args:
        f: A file object that can be passed into a gzip reader.
      Returns:
        data: A 4D uint8 numpy array [index, y, x, depth].
      Raises:
        ValueError: If the bytestream does not start with 2051.
      """
      print('Extracting', f.name)
      with gzip.GzipFile(fileobj=f) as bytestream:
        magic = _read32(bytestream)
        if magic != 2051:
          raise ValueError('Invalid magic number %d in MNIST image file: %s' %
                           (magic, f.name))
        num_images = _read32(bytestream)
        rows = _read32(bytestream)
        cols = _read32(bytestream)
        buf = bytestream.read(rows * cols * num_images)
        data = numpy.frombuffer(buf, dtype=numpy.uint8)
        data = data.reshape(num_images, rows, cols, 1)
        return data
    
    def extract_labels(f, one_hot=False, num_classes=10):
      """Extract the labels into a 1D uint8 numpy array [index].
      Args:
        f: A file object that can be passed into a gzip reader.
        one_hot: Does one hot encoding for the result.
        num_classes: Number of classes for the one hot encoding.
      Returns:
        labels: a 1D uint8 numpy array.
      Raises:
        ValueError: If the bystream doesn't start with 2049.
      """
      print('Extracting', f.name)
      with gzip.GzipFile(fileobj=f) as bytestream:
        magic = _read32(bytestream)
        if magic != 2049:
          raise ValueError('Invalid magic number %d in MNIST label file: %s' %
                           (magic, f.name))
        num_items = _read32(bytestream)
        buf = bytestream.read(num_items)
        labels = numpy.frombuffer(buf, dtype=numpy.uint8)
        if one_hot:
          return dense_to_one_hot(labels, num_classes)
        return labels
    
    def read_data_sets(train_dir,
                       fake_data=False,
                       one_hot=False,
                       dtype=tf.float32,
                       reshape=True,
                       validation_size=5000,
                       seed=None):
      '''
      if fake_data:
        def fake():
          return DataSet(
              [], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed)
      
        train = fake()
        validation = fake()
        test = fake()
        return base.Datasets(train=train, validation=validation, test=test)
      '''
    
      TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'
      TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'
      TEST_IMAGES = 't10k-images-idx3-ubyte.gz'
      TEST_LABELS = 't10k-labels-idx1-ubyte.gz'
    
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_IMAGES
      with open(local_file, 'rb') as f:
        train_images = extract_images(f)
    
                                       
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TRAIN_LABELS
      with open(local_file, 'rb') as f:
        train_labels = extract_labels(f, one_hot=one_hot)
    
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_IMAGES
      with open(local_file, 'rb') as f:
        test_images = extract_images(f)
    
      local_file = '/home/gao/dl_learning/py-example/MNIST_data/' + TEST_LABELS
         
      with open(local_file, 'rb') as f:
        test_labels = extract_labels(f, one_hot=one_hot)
    
      if not 0 <= validation_size <= len(train_images):
        raise ValueError(
            'Validation size should be between 0 and {}. Received: {}.'
            .format(len(train_images), validation_size))
    
      validation_images = train_images[:validation_size]
      validation_labels = train_labels[:validation_size]
      train_images = train_images[validation_size:]
      train_labels = train_labels[validation_size:]
    
      options = dict(dtype=dtype, reshape=reshape, seed=seed)
      
      train = DataSet(train_images, train_labels, **options)
      validation = DataSet(validation_images, validation_labels, **options)
      test = DataSet(test_images, test_labels, **options)
      
      return Datasets(train=train, validation=validation, test=test)
    
    class DataSet(object):
    
      def __init__(self,
                   images,
                   labels,
                   fake_data=False,
                   one_hot=False,
                   dtype=tf.float32,
                   reshape=True,
                   seed=None):
        """Construct a DataSet.
        one_hot arg is used only if fake_data is true.  `dtype` can be either
        `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
        `[0, 1]`.  Seed arg provides for convenient deterministic testing.
        """
        seed1, seed2 = random_seed.get_seed(seed)
        # If op level seed is not set, use whatever graph level seed is returned
        numpy.random.seed(seed1 if seed is None else seed2)
        dtype = dtypes.as_dtype(dtype).base_dtype
        if dtype not in (dtypes.uint8, dtypes.float32):
          raise TypeError('Invalid image dtype %r, expected uint8 or float32' %
                          dtype)
        if fake_data:
          self._num_examples = 10000
          self.one_hot = one_hot
        else:
          assert images.shape[0] == labels.shape[0], (
              'images.shape: %s labels.shape: %s' % (images.shape, labels.shape))
          self._num_examples = images.shape[0]
    
          # Convert shape from [num examples, rows, columns, depth]
          # to [num examples, rows*columns] (assuming depth == 1)
          if reshape:
            assert images.shape[3] == 1
            images = images.reshape(images.shape[0],
                                    images.shape[1] * images.shape[2])
          if dtype == dtypes.float32:
            # Convert from [0, 255] -> [0.0, 1.0].
            images = images.astype(numpy.float32)
            images = numpy.multiply(images, 1.0 / 255.0)
        self._images = images
        self._labels = labels
        self._epochs_completed = 0
        self._index_in_epoch = 0
    
      @property
      def images(self):
        return self._images
    
      @property
      def labels(self):
        return self._labels
    
      @property
      def num_examples(self):
        return self._num_examples
    
      @property
      def epochs_completed(self):
        return self._epochs_completed
    
      def next_batch(self, batch_size, fake_data=False, shuffle=True):
        """Return the next `batch_size` examples from this data set."""
        if fake_data:
          fake_image = [1] * 784
          if self.one_hot:
            fake_label = [1] + [0] * 9
          else:
            fake_label = 0
          return [fake_image for _ in xrange(batch_size)], [
              fake_label for _ in xrange(batch_size)
          ]
        start = self._index_in_epoch
        # Shuffle for the first epoch
        if self._epochs_completed == 0 and start == 0 and shuffle:
          perm0 = numpy.arange(self._num_examples)
          numpy.random.shuffle(perm0)
          self._images = self.images[perm0]
          self._labels = self.labels[perm0]
        # Go to the next epoch
        if start + batch_size > self._num_examples:
          # Finished epoch
          self._epochs_completed += 1
          # Get the rest examples in this epoch
          rest_num_examples = self._num_examples - start
          images_rest_part = self._images[start:self._num_examples]
          labels_rest_part = self._labels[start:self._num_examples]
          # Shuffle the data
          if shuffle:
            perm = numpy.arange(self._num_examples)
            numpy.random.shuffle(perm)
            self._images = self.images[perm]
            self._labels = self.labels[perm]
          # Start next epoch
          start = 0
          self._index_in_epoch = batch_size - rest_num_examples
          end = self._index_in_epoch
          images_new_part = self._images[start:end]
          labels_new_part = self._labels[start:end]
          return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0)
        else:
          self._index_in_epoch += batch_size
          end = self._index_in_epoch
          return self._images[start:end], self._labels[start:end]
    
    ################################################################
    
    
    mnist = read_data_sets('MNIST_data', one_hot=True)
    
    def add_layer(inputs, in_size, out_size, activation_function=None):
      weights = tf.Variable(tf.random_normal([in_size, out_size]))
      #biases = tf.Variable(tf.zeros([1, out_size]) + 100)
      biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
      wx_b = tf.matmul(inputs, weights) + biases
      if activation_function is None:
        outputs = wx_b
      else:
        outputs = activation_function(wx_b,)
      return outputs
    
    xs = tf.placeholder(tf.float32, [None, 28*28])
    ys = tf.placeholder(tf.float32, [None, 10])
    
    def compute_accuracy(v_xs, v_ys):
      global prediction
      y_pre = sess.run(prediction, feed_dict={xs: v_xs})
      correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
      result = sess.run(accuracy, feed_dict={xs:v_xs, ys:v_ys})
      return result
    
    #layer1 = add_layer(xs, 784, 50, activation_function = tf.nn.tanh)
    #layer1 = add_layer(xs, 784, 30, activation_function = tf.nn.sigmoid)
    layer1 = add_layer(xs, 784, 50, activation_function = tf.nn.relu)
    layer2 = add_layer(layer1, 50, 50, activation_function = tf.nn.tanh)
    layer3 = add_layer(layer2, 50, 50, activation_function = tf.nn.tanh)
    layer4 = add_layer(layer3, 50, 50, activation_function = tf.nn.tanh)
    
    prediction = add_layer(layer4, 50, 10, activation_function = tf.nn.softmax)
    
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), 
                                  reduction_indices=[1]))
    
    train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)
    
    sess = tf.Session()
    
    sess.run(tf.initialize_all_variables())
    
    for i in range(100000):
      batch_xs, batch_ys = mnist.train.next_batch(100)
      sess.run(layer1, feed_dict = {xs: batch_xs, ys: batch_ys})
      sess.run(train_step, feed_dict = {xs: batch_xs, ys: batch_ys})
      if i % 100 == 0:
        print("step:", i, ", ", compute_accuracy(mnist.test.images, mnist.test.labels))
    
    # ex-11.py
    import tensorflow as tf
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelBinarizer
    
    digits = load_digits()
    x = digits.data
    y = digits.target
    # one-hot transform of the labels
    y = LabelBinarizer().fit_transform(y)
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)
    
    def add_layer(inputs, in_size, out_size, layer_name, activation_function = None):
      weights = tf.Variable(tf.random_normal([in_size, out_size]))
      biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
      wx_b = tf.matmul(inputs, weights) + biases
    
      if activation_function is None:
        outputs = wx_b
      else:
        outputs = activation_function(wx_b)
      tf.summary.histogram(layer_name + '/outputs', outputs)
      return outputs
    
    xs = tf.placeholder(tf.float32, [None, 8*8])
    ys = tf.placeholder(tf.float32, [None, 10])
    
    layer1 = add_layer(xs, 64, 100, 'layer1', activation_function = tf.nn.tanh)
    prediction = add_layer(layer1, 100, 10, 'layer2', activation_function = tf.nn.softmax)
    
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
    
    tf.summary.scalar('loss', cross_entropy)
    
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
    
    sess = tf.Session()
    merged = tf.summary.merge_all()
    
    # tf.train.SummaryWriter was removed; tf.summary.FileWriter is the TF 1.x API
    train_writer = tf.summary.FileWriter("logs/train", sess.graph)
    test_writer = tf.summary.FileWriter("logs/test", sess.graph)
    
    sess.run(tf.initialize_all_variables())
    
    for i in range(1000):
      sess.run(train_step, feed_dict = {xs: x_train, ys: y_train})
      if i % 100 == 0:
        train_result = sess.run(merged, feed_dict={xs: x_train, ys: y_train})
        test_result = sess.run(merged, feed_dict={xs: x_test, ys: y_test})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)
    
    # ex-11-dropout.py
    import tensorflow as tf
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelBinarizer
    
    digits = load_digits()
    x = digits.data
    y = digits.target
    # one-hot transform of the labels
    y = LabelBinarizer().fit_transform(y)
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3)
    
    def add_layer(inputs, in_size, out_size, layer_name, activation_function = None):
      weights = tf.Variable(tf.random_normal([in_size, out_size]))
      biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
      wx_b = tf.matmul(inputs, weights) + biases
      # dropout: randomly keep only a proportion (keep_prob) of the units
      wx_b = tf.nn.dropout(wx_b, keep_prob)
    
      if activation_function is None:
        outputs = wx_b
      else:
        outputs = activation_function(wx_b)
      # tf.histogram_summary was removed; tf.summary.histogram is the TF 1.x API
      tf.summary.histogram(layer_name + '/outputs', outputs)
      return outputs
    
    keep_prob = tf.placeholder(tf.float32)
    xs = tf.placeholder(tf.float32, [None, 8*8])
    ys = tf.placeholder(tf.float32, [None, 10])
    
    layer1 = add_layer(xs, 64, 100, 'layer1', activation_function = tf.nn.tanh)
    prediction = add_layer(layer1, 100, 10, 'layer2', activation_function = tf.nn.softmax)
    
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
    
    tf.summary.scalar('loss', cross_entropy)
    
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
    
    sess = tf.Session()
    merged = tf.summary.merge_all()
    
    train_writer = tf.summary.FileWriter("logs/train", sess.graph)
    test_writer = tf.summary.FileWriter("logs/test", sess.graph)
    
    sess.run(tf.initialize_all_variables())
    
    for i in range(1000):
      # keep 70% of the units during training
      sess.run(train_step, feed_dict = {xs: x_train, ys: y_train, keep_prob: 0.7})
      if i % 100 == 0:
        # (for pure evaluation, keep_prob would normally be fed as 1)
        train_result = sess.run(merged, feed_dict={xs: x_train, ys: y_train, keep_prob: 0.7})
        test_result = sess.run(merged, feed_dict={xs: x_test, ys: y_test, keep_prob: 0.7})
        train_writer.add_summary(train_result, i)
        test_writer.add_summary(test_result, i)
    

    input_data.py

    # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # ==============================================================================
    
    """Functions for downloading and reading MNIST data."""
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    import gzip
    import os
    import tempfile
    
    import numpy
    from six.moves import urllib
    from six.moves import xrange  # pylint: disable=redefined-builtin
    import tensorflow as tf
    from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
    

    mnist.py

    # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # ==============================================================================
    
    """Functions for downloading and reading MNIST data."""
    
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    import gzip
    
    import numpy
    from six.moves import xrange  # pylint: disable=redefined-builtin
    
    from tensorflow.contrib.learn.python.learn.datasets import base
    from tensorflow.python.framework import dtypes
    from tensorflow.python.framework import random_seed
    
    # CVDF mirror of http://yann.lecun.com/exdb/mnist/
    SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/'
    
    
    def _read32(bytestream):
      dt = numpy.dtype(numpy.uint32).newbyteorder('>')
      return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]
    
    
    def extract_images(f):
      """Extract the images into a 4D uint8 numpy array [index, y, x, depth].
    
      Args:
        f: A file object that can be passed into a gzip reader.
    
      Returns:
        data: A 4D uint8 numpy array [index, y, x, depth].
    
      Raises:
        ValueError: If the bytestream does not start with 2051.
    
      """
      print('Extracting', f.name)
      with gzip.GzipFile(fileobj=f) as bytestream:
        magic = _read32(bytestream)
        if magic != 2051:
          raise ValueError('Invalid magic number %d in MNIST image file: %s' %
                           (magic, f.name))
        num_images = _read32(bytestream)
        rows = _read32(bytestream)
        cols = _read32(bytestream)
        buf = bytestream.read(rows * cols * num_images)
        data = numpy.frombuffer(buf, dtype=numpy.uint8)
        data = data.reshape(num_images, rows, cols, 1)
        return data
    
    
    def dense_to_one_hot(labels_dense, num_classes):
      """Convert class labels from scalars to one-hot vectors."""
      num_labels = labels_dense.shape[0]
      index_offset = numpy.arange(num_labels) * num_classes
      labels_one_hot = numpy.zeros((num_labels, num_classes))
      labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
      return labels_one_hot
    
    
    def extract_labels(f, one_hot=False, num_classes=10):
      """Extract the labels into a 1D uint8 numpy array [index].
    
      Args:
        f: A file object that can be passed into a gzip reader.
        one_hot: Does one hot encoding for the result.
        num_classes: Number of classes for the one hot encoding.
    
      Returns:
        labels: a 1D uint8 numpy array.
    
      Raises:
        ValueError: If the bystream doesn't start with 2049.
      """
      print('Extracting', f.name)
      with gzip.GzipFile(fileobj=f) as bytestream:
        magic = _read32(bytestream)
        if magic != 2049:
          raise ValueError('Invalid magic number %d in MNIST label file: %s' %
                           (magic, f.name))
        num_items = _read32(bytestream)
        buf = bytestream.read(num_items)
        labels = numpy.frombuffer(buf, dtype=numpy.uint8)
        if one_hot:
          return dense_to_one_hot(labels, num_classes)
        return labels
    
    
    class DataSet(object):
    
      def __init__(self,
                   images,
                   labels,
                   fake_data=False,
                   one_hot=False,
                   dtype=dtypes.float32,
                   reshape=True,
                   seed=None):
        """Construct a DataSet.
        one_hot arg is used only if fake_data is true.  `dtype` can be either
        `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
        `[0, 1]`.  Seed arg provides for convenient deterministic testing.
        """
        seed1, seed2 = random_seed.get_seed(seed)
        # If op level seed is not set, use whatever graph level seed is returned
        numpy.random.seed(seed1 if seed is None else seed2)
        dtype = dtypes.as_dtype(dtype).base_dtype
        if dtype not in (dtypes.uint8, dtypes.float32):
          raise TypeError('Invalid image dtype %r, expected uint8 or float32' %
                          dtype)
        if fake_data:
          self._num_examples = 10000
          self.one_hot = one_hot
        else:
          assert images.shape[0] == labels.shape[0], (
              'images.shape: %s labels.shape: %s' % (images.shape, labels.shape))
          self._num_examples = images.shape[0]
    
          # Convert shape from [num examples, rows, columns, depth]
          # to [num examples, rows*columns] (assuming depth == 1)
          if reshape:
            assert images.shape[3] == 1
            images = images.reshape(images.shape[0],
                                    images.shape[1] * images.shape[2])
          if dtype == dtypes.float32:
            # Convert from [0, 255] -> [0.0, 1.0].
            images = images.astype(numpy.float32)
            images = numpy.multiply(images, 1.0 / 255.0)
        self._images = images
        self._labels = labels
        self._epochs_completed = 0
        self._index_in_epoch = 0
    
      @property
      def images(self):
        return self._images
    
      @property
      def labels(self):
        return self._labels
    
      @property
      def num_examples(self):
        return self._num_examples
    
      @property
      def epochs_completed(self):
        return self._epochs_completed
    
      def next_batch(self, batch_size, fake_data=False, shuffle=True):
        """Return the next `batch_size` examples from this data set."""
        if fake_data:
          fake_image = [1] * 784
          if self.one_hot:
            fake_label = [1] + [0] * 9
          else:
            fake_label = 0
          return [fake_image for _ in xrange(batch_size)], [
              fake_label for _ in xrange(batch_size)
          ]
        start = self._index_in_epoch
        # Shuffle for the first epoch
        if self._epochs_completed == 0 and start == 0 and shuffle:
          perm0 = numpy.arange(self._num_examples)
          numpy.random.shuffle(perm0)
          self._images = self.images[perm0]
          self._labels = self.labels[perm0]
        # Go to the next epoch
        if start + batch_size > self._num_examples:
          # Finished epoch
          self._epochs_completed += 1
          # Get the rest examples in this epoch
          rest_num_examples = self._num_examples - start
          images_rest_part = self._images[start:self._num_examples]
          labels_rest_part = self._labels[start:self._num_examples]
          # Shuffle the data
          if shuffle:
            perm = numpy.arange(self._num_examples)
            numpy.random.shuffle(perm)
            self._images = self.images[perm]
            self._labels = self.labels[perm]
          # Start next epoch
          start = 0
          self._index_in_epoch = batch_size - rest_num_examples
          end = self._index_in_epoch
          images_new_part = self._images[start:end]
          labels_new_part = self._labels[start:end]
          return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0)
        else:
          self._index_in_epoch += batch_size
          end = self._index_in_epoch
          return self._images[start:end], self._labels[start:end]
    
    
    def read_data_sets(train_dir,
                       fake_data=False,
                       one_hot=False,
                       dtype=dtypes.float32,
                       reshape=True,
                       validation_size=5000,
                       seed=None):
      if fake_data:
    
        def fake():
          return DataSet(
              [], [], fake_data=True, one_hot=one_hot, dtype=dtype, seed=seed)
    
        train = fake()
        validation = fake()
        test = fake()
        return base.Datasets(train=train, validation=validation, test=test)
    
      TRAIN_IMAGES = 'train-images-idx3-ubyte.gz'
      TRAIN_LABELS = 'train-labels-idx1-ubyte.gz'
      TEST_IMAGES = 't10k-images-idx3-ubyte.gz'
      TEST_LABELS = 't10k-labels-idx1-ubyte.gz'
    
      local_file = base.maybe_download(TRAIN_IMAGES, train_dir,
                                       SOURCE_URL + TRAIN_IMAGES)
      with open(local_file, 'rb') as f:
        train_images = extract_images(f)
    
      local_file = base.maybe_download(TRAIN_LABELS, train_dir,
                                       SOURCE_URL + TRAIN_LABELS)
      with open(local_file, 'rb') as f:
        train_labels = extract_labels(f, one_hot=one_hot)
    
      local_file = base.maybe_download(TEST_IMAGES, train_dir,
                                       SOURCE_URL + TEST_IMAGES)
      with open(local_file, 'rb') as f:
        test_images = extract_images(f)
    
      local_file = base.maybe_download(TEST_LABELS, train_dir,
                                       SOURCE_URL + TEST_LABELS)
      with open(local_file, 'rb') as f:
        test_labels = extract_labels(f, one_hot=one_hot)
    
      if not 0 <= validation_size <= len(train_images):
        raise ValueError(
            'Validation size should be between 0 and {}. Received: {}.'
            .format(len(train_images), validation_size))
    
      validation_images = train_images[:validation_size]
      validation_labels = train_labels[:validation_size]
      train_images = train_images[validation_size:]
      train_labels = train_labels[validation_size:]
    
      
      options = dict(dtype=dtype, reshape=reshape, seed=seed)
      
      train = DataSet(train_images, train_labels, **options)
      validation = DataSet(validation_images, validation_labels, **options)
      test = DataSet(test_images, test_labels, **options)
      
      return base.Datasets(train=train, validation=validation, test=test)
    
    
    def load_mnist(train_dir='MNIST-data'):
      return read_data_sets(train_dir)
    
    