  • GPU Resources Roundup

    1. Baidu AI Studio
      • Free
      • Framework is limited to PaddlePaddle
      • Lots of public projects, well suited for learning
    2. Tencent TI
    3. OPENBAYES
      • https://openbayes.com
      • Registration requires an invitation code, which you can get by applying
      • New users receive some free compute time; after that you pay for resources
      • Well-configured environments; all the major frameworks are supported
    4. Colab
    5. Paperspace
    6. www.52lm.xyz
      • A blockchain-based GPU rental platform
      • A few RMB cents per hour; cheap and works well
    7. Kaggle Kernels
      • Everyone is probably familiar with this one already, so no further comment

    Appendix:

    PyTorch GPU test code

    """
    Optional: Data Parallelism
    ==========================
    **Authors**: `Sung Kim <https://github.com/hunkim>`_ and `Jenny Kang <https://github.com/jennykang>`_
    
    In this tutorial, we will learn how to use multiple GPUs using ``DataParallel``.
    
    It's very easy to use GPUs with PyTorch. You can put the model on a GPU:
    
    .. code:: python
    
        model.cuda()
    
    Then, you can copy all your tensors to the GPU:
    
    .. code:: python
    
        mytensor = my_tensor.cuda()
    
    Please note that just calling ``my_tensor.cuda()`` won't move the tensor
    in place; it returns a new tensor on the GPU, which you need to assign
    and then use on the GPU.
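
    For example (a minimal illustration of the copy semantics; this snippet is
    not part of the script below):

    .. code:: python

        cpu_tensor = torch.randn(2, 2)
        gpu_tensor = cpu_tensor.cuda()  # new tensor on the GPU
        # cpu_tensor is still on the CPU, unchanged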
    
    It's natural to execute your forward and backward propagations on multiple
    GPUs. However, PyTorch will only use one GPU by default. You can easily run
    your operations on multiple GPUs by making your model run in parallel using
    ``DataParallel``:
    
    .. code:: python
    
        model = nn.DataParallel(model)
    
    That's the core behind this tutorial. We will explore it in more detail below.
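
    You can also restrict which devices ``DataParallel`` uses via its
    ``device_ids`` argument (a sketch, assuming at least two visible GPUs):

    .. code:: python

        model = nn.DataParallel(model, device_ids=[0, 1])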
    """
    
    
    ######################################################################
    # Imports and parameters
    # ----------------------
    # 
    # Import PyTorch modules and define parameters.
    # 
    
    import torch
    import torch.nn as nn
    from torch.autograd import Variable
    from torch.utils.data import Dataset, DataLoader
    
    # Parameters and DataLoaders
    input_size = 5
    output_size = 2
    
    batch_size = 30
    data_size = 100
    
    
    ######################################################################
    # Dummy DataSet
    # -------------
    # 
    # Make a dummy (random) dataset. You just need to implement
    # ``__getitem__`` (and ``__len__``).
    #
    
    class RandomDataset(Dataset):
    
        def __init__(self, size, length):
            self.len = length
            self.data = torch.randn(length, size)
    
        def __getitem__(self, index):
            return self.data[index]
    
        def __len__(self):
            return self.len
    
    rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                             batch_size=batch_size, shuffle=True)
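
    # A quick sanity check (illustrative; not required by the tutorial):
    # each full batch from the loader has shape [batch_size, input_size].
    #
    #   next(iter(rand_loader)).size()  # -> torch.Size([30, 5])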
    
    
    ######################################################################
    # Simple Model
    # ------------
    # 
    # For the demo, our model just gets an input, performs a linear operation, and 
    # gives an output. However, you can use ``DataParallel`` on any model (CNN, RNN,
    # Capsule Net etc.) 
    #
    # We've placed a print statement inside the model to monitor the size of input
    # and output tensors. 
    # Please pay attention to what is printed at batch rank 0.
    # 
    
    class Model(nn.Module):
        # Our model
    
        def __init__(self, input_size, output_size):
            super(Model, self).__init__()
            self.fc = nn.Linear(input_size, output_size)
    
        def forward(self, input):
            output = self.fc(input)
            print("  In Model: input size", input.size(), 
                  "output size", output.size())
    
            return output
    
    
    ######################################################################
    # Create Model and DataParallel
    # -----------------------------
    # 
    # This is the core part of the tutorial. First, we need to make a model instance
    # and check if we have multiple GPUs. If we have multiple GPUs, we can wrap 
    # our model using ``nn.DataParallel``. Then we can put our model on the GPU
    # with ``model.cuda()``.
    # 
    
    model = Model(input_size, output_size)
    if torch.cuda.device_count() > 1:
        print("Let's use", torch.cuda.device_count(), "GPUs!")
        # dim = 0: splits [30, xxx] into [10, ...], [10, ...], [10, ...] on 3 GPUs
        model = nn.DataParallel(model)
    
    if torch.cuda.is_available():
        model.cuda()
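
    # On PyTorch >= 0.4 the same step is usually written with ``torch.device``
    # (a modern-idiom sketch; equivalent here, not required for this script):
    #
    #   device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    #   model.to(device)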
    
    
    ######################################################################
    # Run the Model
    # -------------
    # 
    # Now we can see the sizes of input and output tensors.
    # 
    
    for data in rand_loader:
        if torch.cuda.is_available():
            input_var = Variable(data.cuda())
        else:
            input_var = Variable(data)
    
        output = model(input_var)
        print("Outside: input size", input_var.size(),
              "output_size", output.size())
    
    
    ######################################################################
    # Results
    # -------
    # 
    # If you have no GPU or one GPU, when we batch 30 inputs and 30 outputs,
    # the model gets 30 and outputs 30, as expected. But if you have multiple
    # GPUs, you will get results like the following.
    # 
    # 2 GPUs
    # ~~~~~~
    #
    # If you have 2 GPUs, you will see:
    # 
    # .. code:: bash
    # 
    #     # on 2 GPUs
    #     Let's use 2 GPUs!
    #         In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    #         In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    #         In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    #         In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
    #         In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
    #     Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
    # 
    # 3 GPUs
    # ~~~~~~
    # 
    # If you have 3 GPUs, you will see:
    # 
    # .. code:: bash
    # 
    #     Let's use 3 GPUs!
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #         In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #     Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
    # 
    # 8 GPUs
    # ~~~~~~
    # 
    # If you have 8 GPUs, you will see:
    # 
    # .. code:: bash
    # 
    #     Let's use 8 GPUs!
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #     Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #         In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
    #     Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
    # 
    
    
    ######################################################################
    # Summary
    # -------
    # 
    # DataParallel splits your data automatically and sends job orders to
    # multiple replicas of your model on several GPUs. After each replica
    # finishes its job, DataParallel collects and merges the results before
    # returning them to you.
    # 
    # For more information, please check out
    # http://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html.
    # 
    
  • Original article: https://www.cnblogs.com/lokvahkoor/p/12118122.html