  • Summary of Common PyTorch Tensor Creation Methods

    1. Import from numpy / list

    Method: torch.from_numpy(ndarray)

           Common initializers also include torch.tensor and torch.Tensor.

           The difference:

                  tensor(): initializes from existing numpy or list data.

                  Tensor(): 1. accepts the data's dimensions (a shape), e.g. Tensor(2, 3)

                            2. or accepts existing data, e.g. Tensor([2, 3])
                               (see the short sketch after the example below)

    import numpy as np
    import torch

    a = np.array([1,2,3])
    data = torch.from_numpy(a)
    print(data)
    """
    Output:
    tensor([1, 2, 3], dtype=torch.int32)
    """
    b = np.ones([2,3])
    data1 = torch.from_numpy(b)
    print(data1)
    """
    Output:
    tensor([[1., 1., 1.],
            [1., 1., 1.]], dtype=torch.float64)
    """
    # the argument is a list
    print(torch.tensor([2.,1.2]))
    """
    Output:
    tensor([2.0000, 1.2000])
    """

    2. Uninitialized / setting the default type

    Methods:

      torch.empty(size)

      torch.FloatTensor(d1, d2, d3)

      torch.IntTensor(d1, d2, d3)

      torch.set_default_tensor_type(torch.DoubleTensor)  (sets the default type)

    # uninitialized
    data = torch.empty(1)
    print(data)
    print(torch.Tensor(2,3))
    print(torch.IntTensor(3,4))
    """
    Output:
    tensor([0.])
    tensor([[0.0000e+00, 0.0000e+00, 2.8026e-45],
            [0.0000e+00, 1.4013e-45, 0.0000e+00]])
    tensor([[1718379891, 1698963500, 1701013878, 1986356256],
            [ 744842089, 1633899296, 1416782188,  543518841],
            [1887007844, 1646275685,  543977327, 1601073006]], dtype=torch.int32)
    """
    
    print(torch.tensor([1.,2]).type())
    torch.set_default_tensor_type(torch.DoubleTensor)
    print(torch.tensor([1.,2]).type())
    """
    Output:
    torch.FloatTensor
    torch.DoubleTensor
    """

    3. Random generation

      torch.rand(size): uniform distribution on [0, 1)

      torch.rand_like(input, dtype): takes a tensor, reads its shape, then fills it with rand (a short sketch follows this section's example)

      torch.randint(low=0, high, size): random integer tensor over [low, high), left-closed, right-open

      torch.randn(size): the standard normal distribution N(0, 1), i.e. mean 0 and variance 1 (general form N(u, std))

      torch.full(size, fill_value): fills every element with the same value

      torch.normal(mean, std, out=None)

        Returns a tensor of random numbers drawn from separate normal distributions whose means and standard deviations are given. mean is a tensor holding the mean of each output element's normal distribution, and std is a tensor holding the corresponding standard deviations. The shapes of mean and std need not match, but they must contain the same number of elements.

        Parameters:

          mean (Tensor) – the means

          std (Tensor) – the standard deviations

          out (Tensor) – optional output tensor

    data = torch.rand(3,3)
    print(data)
    """
    Output:
    tensor([[0.0775, 0.2610, 0.0833],
            [0.7911, 0.6999, 0.6589],
            [0.4790, 0.6801, 0.6582]])
    """
    data_like = torch.randn_like(data)
    print(data_like)
    """
    Output:
    tensor([[ 0.6866,  2.5939, -0.2480],
            [-0.9259, -0.3617,  0.5759],
            [-1.0179, -1.0938,  0.6426]])
    """
    print(torch.randint(1,10,[3,3]))
    """
    Output:
    tensor([[7, 3, 2],
            [8, 6, 7],
            [7, 7, 7]])
    """
    
    data = torch.randn(3,3)
    print(data)
    """
    Output:
    tensor([[-0.6225, -0.1253, -0.1083],
            [-0.3199, -0.5670,  0.2898],
            [-0.6500,  0.9275,  1.0377]])
    """
    data = torch.normal(mean=torch.full([10], 0.), std=torch.arange(1, 0, -0.1))  # 0. keeps the mean tensor floating point
    print(data)
    """
    Output:
    tensor([-0.6509, -1.4877,  0.4740,  1.1891,  0.1009, -0.4449, -0.3422,  0.1519,
            -0.2735,  0.1140])
    """
    print(torch.full([2,4], 7.))
    print(torch.full([], 7.)) # a scalar
    """
    Output:
    tensor([[7., 7., 7., 7.],
            [7., 7., 7., 7.]])
    tensor(7.)
    """

    4. Sequence generation

      torch.arange(start, end, step)

        # [start, end), left-closed, right-open; the default step is 1

      torch.range(start, end, step) (deprecated; use arange instead)

        # includes end; step is the distance between two points

      torch.linspace(start, end, steps) # arithmetic sequence

        # includes end; steps is the number of points, endpoints included (evenly spaced)

      torch.logspace(start, end, steps) # points from 10**start to 10**end, evenly spaced on a log scale (the base defaults to 10)

    print(torch.arange(0,10))
    print(torch.arange(0,10,2))
    print(torch.linspace(0,10,steps=3))
    print(torch.linspace(0,10,steps=11))
    print(torch.logspace(0,-1,steps=10))
    print(torch.logspace(0,1,steps=10))
    """
    Output:
    tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    tensor([0, 2, 4, 6, 8])
    tensor([ 0.,  5., 10.])
    tensor([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])
    tensor([1.0000, 0.7743, 0.5995, 0.4642, 0.3594, 0.2783, 0.2154, 0.1668, 0.1292,
            0.1000])
    tensor([ 1.0000,  1.2915,  1.6681,  2.1544,  2.7826,  3.5938,  4.6416,  5.9948,
             7.7426, 10.0000])
    """

    5. All zeros, all ones, identity matrix

      torch.zeros(size)

      torch.zeros_like(input, dtype)

      torch.ones(size)

      torch.ones_like(input, dtype)

      torch.eye(size)

    print(torch.ones(3,3))
    print(torch.zeros(2,3))
    print(torch.eye(4,4))
    print(torch.eye(2))
    """
    Output:
    tensor([[1., 1., 1.],
            [1., 1., 1.],
            [1., 1., 1.]])
    tensor([[0., 0., 0.],
            [0., 0., 0.]])
    tensor([[1., 0., 0., 0.],
            [0., 1., 0., 0.],
            [0., 0., 1., 0.],
            [0., 0., 0., 1.]])
    tensor([[1., 0.],
            [0., 1.]])
    """

    6. torch.randperm(n) # generates a random permutation of the n integers from 0 to n-1

    print(torch.randperm(10))
    """
    Output:
    tensor([0, 1, 4, 7, 9, 8, 6, 3, 2, 5])
    """
    

      
