  • TensorFlow APIs (study notes)

    1.tf.nn.conv2d

    conv2d(
        input,
        filter,
        strides,
        padding,
        use_cudnn_on_gpu=True,
        data_format='NHWC',
        name=None
    )
    
    Parameters:
        input (tensor): A 4-D tensor of shape [batch, in_height, in_width, in_channels]; for an image input this is [number of images in a training batch, image height, image width, number of channels].
        filter (tensor): A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]; for an image input this is [kernel height, kernel width, number of input channels, number of kernels]. The filter's in_channels must equal the input's in_channels.
        strides (list): A list of length 4 giving the stride along each dimension of input; usually strides[0] = strides[3] = 1.
        padding (string): Must be "VALID" or "SAME"; this determines the convolution mode ("VALID" drops the border, "SAME" pads the input).
        use_cudnn_on_gpu (bool): Whether to use cuDNN acceleration; defaults to True.
        data_format (string): Must be "NHWC" or "NCHW"; defaults to "NHWC".
        name (string): Name of the operation.

    Create conv2d.py

    import tensorflow as tf
    
    a = tf.constant([1,1,1,0,0,0,1,1,1,0,0,0,1,1,1,0,0,1,1,0,0,1,1,0,0],dtype=tf.float32,shape=[1,5,5,1])  # a single 5x5 one-channel "image"
    b = tf.constant([1,0,1,0,1,0,1,0,1],dtype=tf.float32,shape=[3,3,1,1])  # one 3x3 convolution kernel
    c = tf.nn.conv2d(a,b,strides=[1, 2, 2, 1],padding='VALID')  # stride 2, no padding
    d = tf.nn.conv2d(a,b,strides=[1, 2, 2, 1],padding='SAME')   # stride 2, zero-padded
    with tf.Session() as sess:
        print ("c shape:")
        print (c.shape)
        print ("c value:")
        print (sess.run(c))
        print ("d shape:")
        print (d.shape)
        print ("d value:")
        print (sess.run(d))
    

    Output:

    c shape:
    (1, 2, 2, 1)
    c value:
    [[[[ 4.]
       [ 4.]]
    
      [[ 2.]
       [ 4.]]]]
    d shape:
    (1, 3, 3, 1)
    d value:
    [[[[ 2.]
       [ 3.]
       [ 1.]]
    
      [[ 1.]
       [ 4.]
       [ 3.]]
    
      [[ 0.]
       [ 2.]
       [ 1.]]]]
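
    The two output shapes follow directly from the standard output-size formulas for the two padding modes. A minimal plain-Python illustration (not part of the original notes):

    def conv_output_size(in_size, filter_size, stride, padding):
        # integer ceiling division: ceil(a / b) == (a + b - 1) // b
        if padding == 'VALID':
            return (in_size - filter_size + stride) // stride   # ceil((in - filter + 1) / stride)
        return (in_size + stride - 1) // stride                 # 'SAME': ceil(in / stride)

    print(conv_output_size(5, 3, 2, 'VALID'))  # 2  -> c has shape (1, 2, 2, 1)
    print(conv_output_size(5, 3, 2, 'SAME'))   # 3  -> d has shape (1, 3, 3, 1)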
    

     2.tf.nn.relu

    relu(
        features,
        name=None
    )
    
    Parameters:
        features (tensor): One of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
        name (string): Name of the operation.

    Create the source file relu.py

    import tensorflow as tf
    
    a = tf.constant([1,-2,0,4,-5,6])
    b = tf.nn.relu(a)
    with tf.Session() as sess:
        print (sess.run(b))
    

    Output:

    [1 0 0 4 0 6]
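
    relu simply computes max(features, 0) element-wise, which is why every negative entry becomes 0. A quick NumPy cross-check (assuming NumPy is available):

    import numpy as np
    print(np.maximum(np.array([1, -2, 0, 4, -5, 6]), 0))  # [1 0 0 4 0 6]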
    

     3.tf.nn.max_pool

    max_pool(
        value,
        ksize,
        strides,
        padding,
        data_format='NHWC',
        name=None
    )
    
    Parameters:
        value (tensor): A 4-D tensor of shape [batch, height, width, channels] with dtype tf.float32.
        ksize (list): Size of the pooling window, a list of length 4, usually [1, height, width, 1]; pooling is not applied over the batch and channels dimensions, so the first and last entries are 1.
        strides (list): Stride of the pooling window along each dimension; usually strides[0] = strides[3] = 1.
        padding (string): Must be "VALID" or "SAME"; this determines the pooling mode ("VALID" drops the border, "SAME" pads the input).
        data_format (string): Must be "NHWC" or "NCHW"; defaults to "NHWC".
        name (string): Name of the operation.

    Create the source file max_pool.py

    import tensorflow as tf
    
    a = tf.constant([1,3,2,1,2,9,1,1,1,3,2,3,5,6,1,2],dtype=tf.float32,shape=[1,4,4,1])
    b = tf.nn.max_pool(a,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='VALID')
    c = tf.nn.max_pool(a,ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1],padding='SAME')
    with tf.Session() as sess:
        print ("b shape:")
        print (b.shape)
        print ("b value:")
        print (sess.run(b))
        print ("c shape:")
        print (c.shape)
        print ("c value:")
        print (sess.run(c))
    

    Output:

    b shape:
    (1, 2, 2, 1)
    b value:
    [[[[ 9.]
       [ 2.]]
    
      [[ 6.]
       [ 3.]]]]
    c shape:
    (1, 2, 2, 1)
    c value:
    [[[[ 9.]
       [ 2.]]
    
      [[ 6.]
       [ 3.]]]]
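
    VALID and SAME give identical results here because the 4x4 input divides evenly into 2x2 windows at stride 2, so SAME needs no padding. A NumPy sketch of the same reduction (take the maximum of each 2x2 block):

    import numpy as np

    m = np.array([1, 3, 2, 1, 2, 9, 1, 1, 1, 3, 2, 3, 5, 6, 1, 2],
                 dtype=np.float32).reshape(4, 4)
    # split the 4x4 matrix into 2x2 blocks and take each block's maximum
    print(m.reshape(2, 2, 2, 2).max(axis=(1, 3)))  # [[9. 2.] [6. 3.]]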
    

     4.tf.nn.dropout

    dropout(
        x,
        keep_prob,
        noise_shape=None,
        seed=None,
        name=None
    ) 
    Parameters:
        x (tensor): Input tensor; each output element is the corresponding element of x divided by keep_prob (with probability keep_prob) and 0 otherwise.
        keep_prob (scalar tensor): Probability that each element is kept; usually a placeholder.
        noise_shape (tensor): By default each element is dropped independently. If noise_shape is given and noise_shape[i] == shape(x)[i], elements along dimension i are still dropped independently; if noise_shape[i] != shape(x)[i], elements along dimension i are dropped or kept together.
        seed (number): If set, the same dropout pattern is produced on every run.
        name (string): Name of the operation.

    Create the source file dropout.py

    import tensorflow as tf
    
    a = tf.constant([1,2,3,4,5,6],shape=[2,3],dtype=tf.float32)
    b = tf.placeholder(tf.float32)
    c = tf.nn.dropout(a,b,[2,1],1)  # keep_prob=b, noise_shape=[2,1] (each row kept or dropped as a whole), seed=1
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print (sess.run(c,feed_dict={b:0.75}))
    

    Output:

    [[ 0.          0.          0.        ]
     [ 5.33333349  6.66666651  8.        ]]
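
    Because noise_shape is [2, 1], each row of the 2x3 input is kept or dropped as a whole; in this run the first row was dropped and the second kept. Kept elements are divided by keep_prob so the expected value of the output matches the input. A quick check of the scaling:

    keep_prob = 0.75
    print([v / keep_prob for v in [4.0, 5.0, 6.0]])  # [5.333..., 6.666..., 8.0]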
    

    5. tf.nn.sigmoid_cross_entropy_with_logits

    sigmoid_cross_entropy_with_logits(
        _sentinel=None,
        labels=None,
        logits=None,
        name=None
    )
    
    Parameters:
        _sentinel (None): Unused parameter.
        labels (tensor): Same type and shape as logits.
        logits (tensor): Of type float32 or float64.
        name (string): Name of the operation.

    Create the source file sigmoid_cross_entropy_with_logits.py

    import tensorflow as tf
    x = tf.constant([1,2,3,4,5,6,7],dtype=tf.float64)
    y = tf.constant([1,1,1,0,0,1,0],dtype=tf.float64)
    loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = y,logits = x)
    with tf.Session() as sess:
        print (sess.run(loss))
    

    Output:

    [  3.13261688e-01   1.26928011e-01   4.85873516e-02   4.01814993e+00
       5.00671535e+00   2.47568514e-03   7.00091147e+00]
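
    The op computes the element-wise sigmoid cross entropy, which TensorFlow evaluates in the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)), where x is the logit and z the label. A NumPy cross-check of the values above:

    import numpy as np

    x = np.array([1, 2, 3, 4, 5, 6, 7], dtype=np.float64)
    z = np.array([1, 1, 1, 0, 0, 1, 0], dtype=np.float64)
    # stable rewrite of  z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
    print(np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x))))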
    

     6.tf.truncated_normal

    truncated_normal(
        shape,
        mean=0.0,
        stddev=1.0,
        dtype=tf.float32,
        seed=None,
        name=None
    )
    
    Parameters:
        shape (1-D integer tensor or array): Shape of the output tensor.
        mean (0-D tensor or number): Mean of the distribution.
        stddev (0-D tensor or number): Standard deviation of the distribution.
        dtype (dtype): Type of the output.
        seed (number): Random seed; if set, the same random values are produced on every run.
        name (string): Name of the operation.

    Create the source file truncated_normal.py

    import tensorflow as tf
    initial = tf.truncated_normal(shape=[3,3], mean=0, stddev=1)
    print(tf.Session().run(initial))
    

    Output:

    [[ 0.18815269 -0.4689253   0.63908994]
     [ 0.01734953 -0.46975166 -0.25023392]
     [ 1.12803638 -1.84143591  0.15422213]]
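
    truncated_normal draws from a normal distribution but discards and re-draws any sample that falls more than two standard deviations from the mean, so every value lies inside (mean - 2*stddev, mean + 2*stddev). A rough sanity check in the same TF 1.x style (a sketch, not from the original notes):

    import tensorflow as tf

    samples = tf.Session().run(tf.truncated_normal(shape=[1000], mean=0.0, stddev=1.0))
    print(samples.min(), samples.max())  # both should fall strictly inside (-2, 2)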
    

    7.tf.constant

    constant(
        value,
        dtype=None,
        shape=None,
        name='Const',
        verify_shape=False
    )
    
    Parameters:
        value (constant value or list): Values of the output tensor.
        dtype (dtype): Element type of the output tensor.
        shape (1-D integer tensor or array): Shape of the output tensor.
        name (string): Name of the tensor.
        verify_shape (Boolean): Whether to verify that shape matches the shape of value; if False and they do not match, the remaining entries are filled with the last element of value.

    Create the source file constant.py

    #!/usr/bin/python
    
    import tensorflow as tf
    import numpy as np
    a = tf.constant([1,2,3,4,5,6],shape=[2,3])
    b = tf.constant(-1,shape=[3,2])
    c = tf.matmul(a,b)
    
    e = tf.constant(np.arange(1,13,dtype=np.int32),shape=[2,2,3])
    f = tf.constant(np.arange(13,25,dtype=np.int32),shape=[2,3,2])
    g = tf.matmul(e,f)
    with tf.Session() as sess:
        print (sess.run(a))
        print ("##################################")
        print (sess.run(b))
        print ("##################################")
        print (sess.run(c))
        print ("##################################")
        print (sess.run(e))
        print ("##################################")
        print (sess.run(f))
        print ("##################################")
        print (sess.run(g))
    

    Output:

    [[1 2 3]
     [4 5 6]]
    ##################################
    [[-1 -1]
     [-1 -1]
     [-1 -1]]
    ##################################
    [[ -6  -6]
     [-15 -15]]
    ##################################
    [[[ 1  2  3]
      [ 4  5  6]]
    
     [[ 7  8  9]
      [10 11 12]]]
    ##################################
    [[[13 14]
      [15 16]
      [17 18]]
    
     [[19 20]
      [21 22]
      [23 24]]]
    ##################################
    [[[ 94 100]
      [229 244]]
    
     [[508 532]
      [697 730]]]
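
    With the 3-D inputs e and f, tf.matmul multiplies the matrices batch by batch (two independent 2x3 by 3x2 products). A NumPy cross-check of the last result:

    import numpy as np

    e = np.arange(1, 13, dtype=np.int32).reshape(2, 2, 3)
    f = np.arange(13, 25, dtype=np.int32).reshape(2, 3, 2)
    print(np.matmul(e, f))  # [[[ 94 100] [229 244]]  [[508 532] [697 730]]]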
    

     8.tf.placeholder

    placeholder(
        dtype,
        shape=None,
        name=None
    )
    
    Parameters:
        dtype (dtype): Data type of the placeholder.
        shape (1-D integer tensor or array): Shape of the placeholder.
        name (string): Name of the placeholder.

    Create the source file placeholder.py

    #!/usr/bin/python
    
    import tensorflow as tf
    import numpy as np
    
    x = tf.placeholder(tf.float32,[None,3])
    y = tf.matmul(x,x)
    with tf.Session() as sess:
        rand_array = np.random.rand(3,3)
        print(sess.run(y,feed_dict={x:rand_array}))
    

    Output:

    [[ 0.64431196  0.68349576  0.57412398]
     [ 0.84553117  1.64796805  0.7788316 ]
     [ 0.84342241  0.8947317   0.8024016 ]]
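
    The shape [None, 3] leaves the first dimension unspecified, so the same placeholder accepts batches of any size. A minimal sketch (the reduce_sum op here is only an illustration, not part of the original notes):

    import tensorflow as tf
    import numpy as np

    p = tf.placeholder(tf.float32, [None, 3])
    s = tf.reduce_sum(p, axis=1)  # row sums; works for any batch size
    with tf.Session() as sess:
        print(sess.run(s, feed_dict={p: np.ones((2, 3))}))  # [ 3.  3.]
        print(sess.run(s, feed_dict={p: np.ones((4, 3))}))  # [ 3.  3.  3.  3.]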
    

    9. tf.nn.bias_add adds the bias vector bias to value. It can be seen as a special case of tf.add in which bias must be 1-D, its length must equal the size of the last dimension of value, and its data type must match that of value.

    bias_add(
        value,
        bias,
        data_format=None,
        name=None
    )
    
    Parameters:
        value (tensor): Of type float, double, int64, int32, uint8, int16, int8, complex64, or complex128.
        bias (1-D tensor): Its length must equal the size of the last dimension of value.
        data_format (string): Data format; supports "NHWC" and "NCHW".
        name (string): Name of the operation.

    Create the source file bias_add.py

    #!/usr/bin/python
    
    import tensorflow as tf
    import numpy as np
    
    a = tf.constant([[1.0, 2.0],[1.0, 2.0],[1.0, 2.0]])
    b = tf.constant([2.0,1.0])
    c = tf.constant([1.0])
    sess = tf.Session()
    print (sess.run(tf.nn.bias_add(a, b)))
    #print (sess.run(tf.nn.bias_add(a,c)))  # error: bias c has length 1, which does not match the last dimension of a (2)
    print ("##################################")
    print (sess.run(tf.add(a, b)))
    print ("##################################")
    print (sess.run(tf.add(a, c)))
    

    Output:

    [[ 3.  3.]
     [ 3.  3.]
     [ 3.  3.]]
    ##################################
    [[ 3.  3.]
     [ 3.  3.]
     [ 3.  3.]]
    ##################################
    [[ 2.  3.]
     [ 2.  3.]
     [ 2.  3.]]
    

     10.tf.reduce_mean 

    reduce_mean(
        input_tensor,
        axis=None,
        keep_dims=False,
        name=None,
        reduction_indices=None
    )
    
    Parameters:
        input_tensor (tensor): The tensor to average.
        axis (None, 0 or 1): None computes the mean over all elements; 0 computes the mean of each column; 1 computes the mean of each row.
        keep_dims (Boolean): Keep the original number of dimensions (e.g. do not reduce a 2-D matrix to a 1-D vector).
        name (string): Name of the operation.
        reduction_indices (None): Equivalent to axis; deprecated.

    Create the source file reduce_mean.py

    #!/usr/bin/python
    
    import tensorflow as tf
    import numpy as np
    
    initial = [[1.,1.],[2.,2.]]
    x = tf.Variable(initial,dtype=tf.float32)
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        print(sess.run(tf.reduce_mean(x)))
        print(sess.run(tf.reduce_mean(x,0))) # mean of each column (axis 0)
        print(sess.run(tf.reduce_mean(x,1))) # mean of each row (axis 1)
    

    Output:

    1.5
    [ 1.5  1.5]
    [ 1.  2.]
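
    The keep_dims flag listed in the table keeps the reduced axis as a size-1 dimension instead of dropping it. A small sketch:

    import tensorflow as tf

    x = tf.constant([[1., 1.], [2., 2.]])
    with tf.Session() as sess:
        print(sess.run(tf.reduce_mean(x, 0, keep_dims=True)))  # [[ 1.5  1.5]], shape (1, 2)
        print(sess.run(tf.reduce_mean(x, 1, keep_dims=True)))  # [[ 1.] [ 2.]], shape (2, 1)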
    

    11. tf.squared_difference computes the element-wise squared difference (x - y)^2 of tensors x and y.

    squared_difference(
        x,
        y,
        name=None
    )
    
    Parameters:
        x (tensor): One of the types half, float32, float64, int32, int64, complex64, complex128.
        y (tensor): One of the types half, float32, float64, int32, int64, complex64, complex128.
        name (string): Name of the operation.

    Create the source file squared_difference.py

    #!/usr/bin/python
    
    import tensorflow as tf
    import numpy as np
    
    initial_x = [[1.,1.],[2.,2.]]
    x = tf.Variable(initial_x,dtype=tf.float32)
    initial_y = [[3.,3.],[4.,4.]]
    y = tf.Variable(initial_y,dtype=tf.float32)
    diff = tf.squared_difference(x,y)
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        print(sess.run(diff))
    

    Output:

    [[ 4.  4.]
     [ 4.  4.]]
    

    12. tf.square computes the element-wise square of a tensor.

    square(
        x,
        name=None
    )
    
    Parameters:
        x (tensor): One of the types half, float32, float64, int32, int64, complex64, complex128.
        name (string): Name of the operation.

    Create the source file square.py

    #!/usr/bin/python
    import tensorflow as tf
    import numpy as np
    
    initial_x = [[1.,1.],[2.,2.]]
    x = tf.Variable(initial_x,dtype=tf.float32)
    x2 = tf.square(x)
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        print(sess.run(x2))
    

    Output:

    [[ 1.  1.]
     [ 4.  4.]]
    

    13. tf.Variable maintains state in the graph across executions, for example the changing weights of a neural network.

    __init__(
        initial_value=None,
        trainable=True,
        collections=None,
        validate_shape=True,
        caching_device=None,
        name=None,
        variable_def=None,
        dtype=None,
        expected_shape=None,
        import_scope=None
    )
    
    Parameters:
        initial_value (tensor): Initial value of the Variable; its shape must be specified unless validate_shape is set to False.
        trainable (Boolean): Whether to add the variable to the collection GraphKeys.TRAINABLE_VARIABLES (a collection is a global store that is not affected by variable name scopes: stored once, readable anywhere).
        collections (graph collections): Global store to add the variable to; defaults to GraphKeys.GLOBAL_VARIABLES.
        validate_shape (Boolean): If False, the variable may be initialized with an initial_value of unknown shape; if True (the default), the shape of initial_value must be known.
        caching_device (string): The device on which to cache the variable.
        name (string): Name of the variable.
        dtype (dtype): If set, the initial value is converted to this type.
        expected_shape (TensorShape): If set, the initial value is expected to have this shape.

    Create the source file Variable.py

    #!/usr/bin/python
    
    import tensorflow as tf
    initial = tf.truncated_normal(shape=[10,10],mean=0,stddev=1)
    W=tf.Variable(initial)
    list = [[1.,1.],[2.,2.]]
    X = tf.Variable(list,dtype=tf.float32)
    init_op = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init_op)
        print ("##################(1)################")
        print (sess.run(W))
        print ("##################(2)################")
        print (sess.run(W[:2,:2]))
        op = W[:2,:2].assign(22.*tf.ones((2,2)))
        print ("###################(3)###############")
        print (sess.run(op))
        print ("###################(4)###############")
        print (W.eval(sess)) #computes and returns the value of this variable
        print ("####################(5)##############")
        print (W.eval())  #Usage with the default session
        print ("#####################(6)#############")
        print (W.dtype)
        print (sess.run(W.initial_value))  # re-runs the truncated_normal initializer, so this is a fresh random draw rather than W's current value
        print (sess.run(W.op))  # running the variable's op (not its output tensor) returns None
        print (W.shape)
        print ("###################(7)###############")
        print (sess.run(X))
    

    Output:

    ##################(1)################
    [[-0.14252207  0.43376675  0.75065768  0.89276749  1.16391671  0.39532429
      -0.56278807 -0.49753642  0.23130737 -0.51338279]
     [-0.43028545 -1.24873769 -0.73239309  0.434468   -0.97399759  0.13766721
      -0.6361087  -0.82712436  1.71831048 -0.44968474]
     [-0.96064204 -0.83682173  0.26545268  0.22578485  0.65014762 -0.30830157
      -1.57317054 -0.35661098  1.40849245 -0.37030414]
     [-0.37272176  0.73461288  0.39292559 -1.40008056 -0.37535539  0.24140523
       1.6811192  -0.48886588  1.15467834  0.61565816]
     [-0.39579329 -0.23154807 -1.01895738 -0.95105737  1.24795806 -0.03846256
      -1.71738017 -0.80132687  0.53553152 -0.06413679]
     [-0.97320521 -0.24279226  1.36213648  1.56002438 -1.11646473 -0.35991025
       0.91412318  0.97508883 -1.16207206 -0.68734062]
     [ 0.49044254 -1.87386227 -0.70803815 -0.6591838   0.08034691 -1.24559033
      -0.29389012 -0.2189652  -1.08279467 -0.0175346 ]
     [-0.5608176   1.08259249  1.66278481 -0.33977437  0.42875817  0.55927169
       0.76387608  0.37792665  0.85006535  1.05124724]
     [ 1.75331545 -0.6333124  -0.10046791 -0.1780251  -1.31002116  1.90098214
       0.84569824 -1.42502522 -0.67300171  0.68910873]
     [-1.7385     -0.9806214  -0.32636395 -0.50020444 -0.53104508 -0.33903483
      -0.35751811 -0.03737256 -1.26822579 -1.38264406]]
    ##################(2)################
    [[-0.14252207  0.43376675]
     [-0.43028545 -1.24873769]]
    ###################(3)###############
    [[  2.20000000e+01   2.20000000e+01   7.50657678e-01   8.92767489e-01
        1.16391671e+00   3.95324290e-01  -5.62788069e-01  -4.97536421e-01
        2.31307372e-01  -5.13382792e-01]
     [  2.20000000e+01   2.20000000e+01  -7.32393086e-01   4.34468001e-01
       -9.73997593e-01   1.37667209e-01  -6.36108696e-01  -8.27124357e-01
        1.71831048e+00  -4.49684739e-01]
     [ -9.60642040e-01  -8.36821735e-01   2.65452683e-01   2.25784853e-01
        6.50147617e-01  -3.08301568e-01  -1.57317054e+00  -3.56610984e-01
        1.40849245e+00  -3.70304137e-01]
     [ -3.72721761e-01   7.34612882e-01   3.92925590e-01  -1.40008056e+00
       -3.75355393e-01   2.41405234e-01   1.68111920e+00  -4.88865882e-01
        1.15467834e+00   6.15658164e-01]
     [ -3.95793289e-01  -2.31548071e-01  -1.01895738e+00  -9.51057374e-01
        1.24795806e+00  -3.84625569e-02  -1.71738017e+00  -8.01326871e-01
        5.35531521e-01  -6.41367882e-02]
     [ -9.73205209e-01  -2.42792264e-01   1.36213648e+00   1.56002438e+00
       -1.11646473e+00  -3.59910250e-01   9.14123178e-01   9.75088835e-01
       -1.16207206e+00  -6.87340617e-01]
     [  4.90442544e-01  -1.87386227e+00  -7.08038151e-01  -6.59183800e-01
        8.03469121e-02  -1.24559033e+00  -2.93890119e-01  -2.18965203e-01
       -1.08279467e+00  -1.75346043e-02]
     [ -5.60817599e-01   1.08259249e+00   1.66278481e+00  -3.39774370e-01
        4.28758174e-01   5.59271693e-01   7.63876081e-01   3.77926648e-01
        8.50065351e-01   1.05124724e+00]
     [  1.75331545e+00  -6.33312404e-01  -1.00467913e-01  -1.78025097e-01
       -1.31002116e+00   1.90098214e+00   8.45698237e-01  -1.42502522e+00
       -6.73001707e-01   6.89108729e-01]
     [ -1.73850000e+00  -9.80621397e-01  -3.26363951e-01  -5.00204444e-01
       -5.31045079e-01  -3.39034826e-01  -3.57518107e-01  -3.73725556e-02
       -1.26822579e+00  -1.38264406e+00]]
    ###################(4)###############
    [[  2.20000000e+01   2.20000000e+01   7.50657678e-01   8.92767489e-01
        1.16391671e+00   3.95324290e-01  -5.62788069e-01  -4.97536421e-01
        2.31307372e-01  -5.13382792e-01]
     [  2.20000000e+01   2.20000000e+01  -7.32393086e-01   4.34468001e-01
       -9.73997593e-01   1.37667209e-01  -6.36108696e-01  -8.27124357e-01
        1.71831048e+00  -4.49684739e-01]
     [ -9.60642040e-01  -8.36821735e-01   2.65452683e-01   2.25784853e-01
        6.50147617e-01  -3.08301568e-01  -1.57317054e+00  -3.56610984e-01
        1.40849245e+00  -3.70304137e-01]
     [ -3.72721761e-01   7.34612882e-01   3.92925590e-01  -1.40008056e+00
       -3.75355393e-01   2.41405234e-01   1.68111920e+00  -4.88865882e-01
        1.15467834e+00   6.15658164e-01]
     [ -3.95793289e-01  -2.31548071e-01  -1.01895738e+00  -9.51057374e-01
        1.24795806e+00  -3.84625569e-02  -1.71738017e+00  -8.01326871e-01
        5.35531521e-01  -6.41367882e-02]
     [ -9.73205209e-01  -2.42792264e-01   1.36213648e+00   1.56002438e+00
       -1.11646473e+00  -3.59910250e-01   9.14123178e-01   9.75088835e-01
       -1.16207206e+00  -6.87340617e-01]
     [  4.90442544e-01  -1.87386227e+00  -7.08038151e-01  -6.59183800e-01
        8.03469121e-02  -1.24559033e+00  -2.93890119e-01  -2.18965203e-01
       -1.08279467e+00  -1.75346043e-02]
     [ -5.60817599e-01   1.08259249e+00   1.66278481e+00  -3.39774370e-01
        4.28758174e-01   5.59271693e-01   7.63876081e-01   3.77926648e-01
        8.50065351e-01   1.05124724e+00]
     [  1.75331545e+00  -6.33312404e-01  -1.00467913e-01  -1.78025097e-01
       -1.31002116e+00   1.90098214e+00   8.45698237e-01  -1.42502522e+00
       -6.73001707e-01   6.89108729e-01]
     [ -1.73850000e+00  -9.80621397e-01  -3.26363951e-01  -5.00204444e-01
       -5.31045079e-01  -3.39034826e-01  -3.57518107e-01  -3.73725556e-02
       -1.26822579e+00  -1.38264406e+00]]
    ####################(5)##############
    [[  2.20000000e+01   2.20000000e+01   7.50657678e-01   8.92767489e-01
        1.16391671e+00   3.95324290e-01  -5.62788069e-01  -4.97536421e-01
        2.31307372e-01  -5.13382792e-01]
     [  2.20000000e+01   2.20000000e+01  -7.32393086e-01   4.34468001e-01
       -9.73997593e-01   1.37667209e-01  -6.36108696e-01  -8.27124357e-01
        1.71831048e+00  -4.49684739e-01]
     [ -9.60642040e-01  -8.36821735e-01   2.65452683e-01   2.25784853e-01
        6.50147617e-01  -3.08301568e-01  -1.57317054e+00  -3.56610984e-01
        1.40849245e+00  -3.70304137e-01]
     [ -3.72721761e-01   7.34612882e-01   3.92925590e-01  -1.40008056e+00
       -3.75355393e-01   2.41405234e-01   1.68111920e+00  -4.88865882e-01
        1.15467834e+00   6.15658164e-01]
     [ -3.95793289e-01  -2.31548071e-01  -1.01895738e+00  -9.51057374e-01
        1.24795806e+00  -3.84625569e-02  -1.71738017e+00  -8.01326871e-01
        5.35531521e-01  -6.41367882e-02]
     [ -9.73205209e-01  -2.42792264e-01   1.36213648e+00   1.56002438e+00
       -1.11646473e+00  -3.59910250e-01   9.14123178e-01   9.75088835e-01
       -1.16207206e+00  -6.87340617e-01]
     [  4.90442544e-01  -1.87386227e+00  -7.08038151e-01  -6.59183800e-01
        8.03469121e-02  -1.24559033e+00  -2.93890119e-01  -2.18965203e-01
       -1.08279467e+00  -1.75346043e-02]
     [ -5.60817599e-01   1.08259249e+00   1.66278481e+00  -3.39774370e-01
        4.28758174e-01   5.59271693e-01   7.63876081e-01   3.77926648e-01
        8.50065351e-01   1.05124724e+00]
     [  1.75331545e+00  -6.33312404e-01  -1.00467913e-01  -1.78025097e-01
       -1.31002116e+00   1.90098214e+00   8.45698237e-01  -1.42502522e+00
       -6.73001707e-01   6.89108729e-01]
     [ -1.73850000e+00  -9.80621397e-01  -3.26363951e-01  -5.00204444e-01
       -5.31045079e-01  -3.39034826e-01  -3.57518107e-01  -3.73725556e-02
       -1.26822579e+00  -1.38264406e+00]]
    #####################(6)#############
    <dtype: 'float32_ref'>
    [[ -1.33923304e+00   3.98314148e-01  -1.05487180e+00  -2.22615644e-01
        7.82311618e-01   9.53226268e-01  -2.97039151e-01  -3.89685869e-01
        8.23029280e-01   7.19715893e-01]
     [  1.04759359e+00   8.69891942e-01  -5.51353216e-01  -4.16979402e-01
       -8.62451375e-01  -1.88378954e+00   1.63407588e+00  -1.31232488e+00
       -1.96803153e+00  -4.86700237e-01]
     [ -3.07712853e-01   9.84556377e-02   4.30263966e-01   1.04724443e+00
        7.22615659e-01  -5.49771845e-01  -1.07801104e+00  -3.93206358e-01
        7.11512685e-01   9.57030654e-01]
     [ -1.05264592e+00   6.57385737e-02   7.53750354e-02   1.01429641e+00
       -8.63034368e-01   1.23717473e-03  -6.88091516e-01  -3.96133095e-01
        8.48116100e-01  -9.45674896e-01]
     [  5.37974119e-01   4.54147071e-01  -2.98751473e-01  -1.59583509e+00
        4.50350285e-01   6.21135473e-01  -1.53476131e+00  -1.97713211e-01
        8.77439082e-01   4.83142734e-01]
     [ -5.70582092e-01  -5.23053765e-01   1.98891927e-02   8.01557481e-01
       -3.45719725e-01   1.27735651e+00   1.71628571e+00  -7.70039737e-01
        6.76081061e-01   8.73943627e-01]
     [  1.96820140e+00  -9.69326258e-01  -7.51312554e-01  -1.13384604e+00
       -6.39117777e-01  -7.42796242e-01   9.72097814e-01   1.74299920e+00
        7.48745322e-01   2.23225936e-01]
     [ -2.75906771e-01  -1.16707611e+00  -1.25743651e+00  -7.03301072e-01
       -1.98549139e+00  -7.08913743e-01  -3.58558416e-01   3.72454494e-01
       -5.64896911e-02   8.41890931e-01]
     [ -1.32631826e+00   8.53675187e-01  -1.28031313e-01   2.12832183e-01
       -2.22371653e-01  -9.89087045e-01   2.03618892e-02  -1.93884909e+00
       -1.28941548e+00   2.91048825e-01]
     [ -1.33420026e+00   5.87837324e-02  -1.05547898e-01   2.05826104e-01
        1.52838349e+00   1.29717004e+00  -5.17632477e-02  -1.08887863e+00
       -3.42454642e-01  -1.61216035e-02]]
    None
    (10, 10)
    ###################(7)###############
    [[ 1.  1.]
     [ 2.  2.]]
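
    To make the "maintains state across executions" point concrete, here is a minimal counter sketch (tf.assign_add is assumed here; it is not used in the original notes):

    import tensorflow as tf

    counter = tf.Variable(0, name="counter")
    increment = tf.assign_add(counter, 1)  # adds 1 to the variable each time it runs
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(3):
            print(sess.run(increment))  # 1, 2, 3 -- the value persists between run() calls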
    
  • Original article: https://www.cnblogs.com/gnool/p/8196803.html