  『TensorFlow』Function Reference List: Tensor Attribute Adjustment

    Data Type Conversion (Casting)

    Operation | Description
    tf.string_to_number(string_tensor, out_type=None, name=None)
        Converts a string tensor to numbers.
    tf.to_double(x, name='ToDouble')    Casts to 64-bit floating point (float64).
    tf.to_float(x, name='ToFloat')      Casts to 32-bit floating point (float32).
    tf.to_int32(x, name='ToInt32')      Casts to 32-bit integer (int32).
    tf.to_int64(x, name='ToInt64')      Casts to 64-bit integer (int64).
    tf.cast(x, dtype, name=None)        Casts x (or x.values) to dtype.
        # tensor 'a' is [1.8, 2.2], dtype=tf.float32
        tf.cast(a, tf.int32) ==> [1, 2]  # dtype=tf.int32
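    A minimal runnable sketch of the casting ops above (assuming the pre-1.0
    TensorFlow API this list is written against, where a Session is used to
    evaluate tensors; the tensor values are made up for illustration):

        import tensorflow as tf

        # string -> number, then change dtype with tf.cast / tf.to_double
        s = tf.constant(["1.8", "2.2"])
        nums = tf.string_to_number(s)        # defaults to tf.float32
        as_int = tf.cast(nums, tf.int32)     # [1.8, 2.2] -> [1, 2]
        as_f64 = tf.to_double(nums)          # float32 -> float64

        with tf.Session() as sess:
            print(sess.run([nums, as_int, as_f64]))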

    Shape Operations (Shapes and Shaping)

    Operation | Description
    tf.shape(input, name=None)    Returns the shape of the input.
        # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
        shape(t) ==> [2, 2, 3]
    tf.size(input, name=None)    Returns the number of elements in the input.
        # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
        size(t) ==> 12
    tf.rank(input, name=None)    Returns the rank of the tensor.
        Note: this is not the same as matrix rank; the rank of a tensor is the
        number of indices needed to uniquely address each of its elements,
        also known as its "order", "degree", or "ndims".
        # 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
        # shape of tensor 't' is [2, 2, 3]
        rank(t) ==> 3
    tf.reshape(tensor, shape, name=None)    Changes the shape of a tensor.
        # tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
        # tensor 't' has shape [9]
        reshape(t, [3, 3]) ==>
        [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
        # A -1 in shape means that dimension is inferred so the total
        # number of elements is preserved. For a different tensor 't' with
        # 18 elements (e.g. shape [2, 3, 3]), -1 is inferred to be 9:
        reshape(t, [2, -1]) ==>
        [[1, 1, 1, 2, 2, 2, 3, 3, 3],
         [4, 4, 4, 5, 5, 5, 6, 6, 6]]
    tf.expand_dims(input, dim, name=None)    Inserts a dimension of size 1 into a tensor.
        The op requires -1 - input.dims() <= dim <= input.dims().
        # 't' is a tensor of shape [2]
        shape(expand_dims(t, 0)) ==> [1, 2]
        shape(expand_dims(t, 1)) ==> [2, 1]
        shape(expand_dims(t, -1)) ==> [2, 1]
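    The shape ops above can be combined as in this small sketch (same pre-1.0
    API assumption; in later releases the dim argument of tf.expand_dims is
    called axis):

        import tensorflow as tf

        t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                         [[3, 3, 3], [4, 4, 4]]])

        with tf.Session() as sess:
            print(sess.run(tf.shape(t)))                     # [2 2 3]
            print(sess.run(tf.size(t)))                      # 12
            print(sess.run(tf.rank(t)))                      # 3
            print(sess.run(tf.reshape(t, [4, 3])))           # merge the first two dims
            print(sess.run(tf.shape(tf.expand_dims(t, 0))))  # [1 2 2 3]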

    Slicing and Joining

    Operation | Description
    tf.slice(input_, begin, size, name=None)    Extracts a slice from a tensor.
        If size[i] is -1, all remaining elements in dimension i are included,
        i.e. size[i] = input.dim_size(i) - begin[i].
        The op requires 0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n].
        # 'input' is
        # [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]
        tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
        tf.slice(input, [1, 0, 0], [1, 2, 3]) ==>
        [[[3, 3, 3],
          [4, 4, 4]]]
        tf.slice(input, [1, 0, 0], [2, 1, 3]) ==>
        [[[3, 3, 3]],
         [[5, 5, 5]]]
    tf.split(split_dim, num_split, value, name='split')    Splits a tensor into num_split tensors along one dimension.
        # 'value' is a tensor with shape [5, 30]
        # Split 'value' into 3 tensors along dimension 1
        split0, split1, split2 = tf.split(1, 3, value)
        tf.shape(split0) ==> [5, 10]
    tf.concat(concat_dim, values, name='concat')    Concatenates tensors along one dimension.
        t1 = [[1, 2, 3], [4, 5, 6]]
        t2 = [[7, 8, 9], [10, 11, 12]]
        tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
        tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
        To join tensors along a new axis (i.e. to stack them), use:
        tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors])
        which is equivalent to tf.pack(tensors, axis=axis)
        (see the sketch after the tf.pack entry below).
    tf.pack(values, axis=0, name='pack')    Packs a list of rank-R tensors into one rank-(R+1) tensor.
        # 'x' is [1, 4], 'y' is [2, 5], 'z' is [3, 6]
        # Pack along the first dimension
        pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]
        pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
        Equivalent: tf.pack([x, y, z]) behaves like np.asarray([x, y, z])
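    The equivalence noted above between concatenating expanded tensors and
    tf.pack can be checked directly; a sketch assuming the old argument order
    (tf.concat(concat_dim, values) and tf.pack; in TF >= 1.0 these become
    tf.concat(values, axis) and tf.stack):

        import tensorflow as tf

        x = tf.constant([1, 4])
        y = tf.constant([2, 5])
        z = tf.constant([3, 6])

        packed = tf.pack([x, y, z], axis=1)  # [[1, 2, 3], [4, 5, 6]]
        joined = tf.concat(1, [tf.expand_dims(t, 1) for t in [x, y, z]])

        with tf.Session() as sess:
            a, b = sess.run([packed, joined])
            print(a)
            print((a == b).all())  # True: both build the same tensor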
    tf.reverse(tensor, dims, name=None)    Reverses the tensor along the given dimensions.
        dims is a list of bools with size equal to rank(tensor);
        dimensions marked True are reversed.
        # tensor 't' is
        # [[[[ 0,  1,  2,  3],
        #    [ 4,  5,  6,  7],
        #    [ 8,  9, 10, 11]],
        #   [[12, 13, 14, 15],
        #    [16, 17, 18, 19],
        #    [20, 21, 22, 23]]]]
        # tensor 't' shape is [1, 2, 3, 4]
        # 'dims' is [False, False, False, True]
        reverse(t, dims) ==>
        [[[[ 3,  2,  1,  0],
           [ 7,  6,  5,  4],
           [11, 10,  9,  8]],
          [[15, 14, 13, 12],
           [19, 18, 17, 16],
           [23, 22, 21, 20]]]]
    tf.transpose(a, perm=None, name='transpose')    Permutes the dimensions of a tensor.
        The dimensions are reordered according to the list perm;
        if perm is not given, it defaults to (n-1...0), i.e. a full transpose.
        # 'x' is [[1 2 3], [4 5 6]]
        tf.transpose(x) ==> [[1 4], [2 5], [3 6]]
        # Equivalently
        tf.transpose(x, perm=[1, 0]) ==> [[1 4], [2 5], [3 6]]
    tf.gather(params, indices, validate_indices=None, name=None)    Gathers the slices of params indicated by indices.
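    tf.gather has no example above; a minimal sketch with made-up values
    (same pre-1.0 API assumption):

        import tensorflow as tf

        params = tf.constant([[1, 2], [3, 4], [5, 6], [7, 8]])
        indices = tf.constant([3, 0, 0])

        # Picks whole slices of 'params' along its first dimension.
        with tf.Session() as sess:
            print(sess.run(tf.gather(params, indices)))  # [[7 8] [1 2] [1 2]]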
    tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)
        Returns a one-hot tensor.
        indices = [0, 2, -1, 1]
        depth = 3
        on_value = 5.0
        off_value = 0.0
        axis = -1
        # Then output is [4 x 3]:
        output =
        [5.0 0.0 0.0]  // one_hot(0)
        [0.0 0.0 5.0]  // one_hot(2)
        [0.0 0.0 0.0]  // one_hot(-1)
        [0.0 5.0 0.0]  // one_hot(1)
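    The same one_hot example as runnable code (a sketch; tf.one_hot keeps this
    signature in later TensorFlow versions as well):

        import tensorflow as tf

        indices = tf.constant([0, 2, -1, 1])
        output = tf.one_hot(indices, depth=3, on_value=5.0, off_value=0.0, axis=-1)

        with tf.Session() as sess:
            print(sess.run(output))
            # [[5. 0. 0.]
            #  [0. 0. 5.]
            #  [0. 0. 0.]   <- index -1 gets only off_value
            #  [0. 5. 0.]]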

    Segmentation

    Operation | Description
    tf.segment_sum(data, segment_ids, name=None)    Computes the sum within each segment given by segment_ids.
        segment_ids is a tensor whose size equals the first dimension of data;
        its ids are integers, and the largest id must not exceed that size.
        c = tf.constant([[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]])
        tf.segment_sum(c, tf.constant([0, 0, 1]))
        ==> [[0 0 0 0]
             [5 6 7 8]]
        The example above has two ids, [0, 1]; the rows of data sharing an id
        are summed and placed in the output row for that id.
        Note that segment_ids must be sorted in non-decreasing order.
    tf.segment_prod(data, segment_ids, name=None)    Computes the product within each segment given by segment_ids.
    tf.segment_min(data, segment_ids, name=None)     Computes the minimum within each segment given by segment_ids.
    tf.segment_max(data, segment_ids, name=None)     Computes the maximum within each segment given by segment_ids.
    tf.segment_mean(data, segment_ids, name=None)    Computes the mean within each segment given by segment_ids.
    tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)
        Like tf.segment_sum, except that the ids in segment_ids may appear in any order.
    tf.sparse_segment_sum(data, indices, segment_ids, name=None)
        Sums along sparse segments of the input.
        c = tf.constant([[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]])
        # Select two rows, one segment.
        tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
        ==> [[0 0 0 0]]
        The rows of data at positions indices = [0, 1] are selected
        and then summed according to the grouping in segment_ids.
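    A runnable sketch of the segmentation ops, following the examples above
    (pre-1.0 API; in TF 2.x these ops live under tf.math):

        import tensorflow as tf

        c = tf.constant([[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]])

        seg = tf.segment_sum(c, tf.constant([0, 0, 1]))         # rows 0+1, then row 2
        uns = tf.unsorted_segment_sum(c, tf.constant([1, 0, 1]), num_segments=2)
        sps = tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))

        with tf.Session() as sess:
            print(sess.run(seg))  # [[0 0 0 0] [5 6 7 8]]
            print(sess.run(uns))  # [[-1 -2 -3 -4] [ 6  8 10 12]]
            print(sess.run(sps))  # [[0 0 0 0]]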
  Source: https://www.cnblogs.com/hellcat/p/6906130.html