# Can converting FP32 to FP16 speed up libtorch inference?

### 1. Does FP16 speed up PyTorch?

PyTorch can convert a model from FP32 to FP16 quickly and cleanly with the `half()` function, but whether FP16 actually runs faster depends on the GPU. Take the following code as an example:

```python
import time

import torch
from torch.autograd import Variable  # legacy wrapper, kept from the original; plain tensors work on newer versions
import torchvision.models as models

import torch.backends.cudnn as cudnn
cudnn.benchmark = True

net = models.resnet18().cuda()
inp = torch.randn(64, 3, 224, 224).cuda()

# Warm-up iterations so cudnn.benchmark can pick its kernels before timing.
for i in range(5):
    net.zero_grad()
    out = net.forward(Variable(inp, requires_grad=True))
    loss = out.sum()
    loss.backward()

torch.cuda.synchronize()
start = time.time()
for i in range(100):
    net.zero_grad()
    out = net.forward(Variable(inp, requires_grad=True))
    loss = out.sum()
    loss.backward()
torch.cuda.synchronize()
end = time.time()

print("FP32 Iterations per second: ", 100 / (end - start))

# Convert both the model and the input to FP16 with half().
net = models.resnet18().cuda().half()
inp = torch.randn(64, 3, 224, 224).cuda().half()

torch.cuda.synchronize()
start = time.time()
for i in range(100):
    net.zero_grad()
    out = net.forward(Variable(inp, requires_grad=True))
    loss = out.sum()
    loss.backward()
torch.cuda.synchronize()
end = time.time()

print("FP16 Iterations per second: ", 100 / (end - start))
```

Performance on a 1080 Ti:

```
FP32 Iterations per second: 10.37743206218922
FP16 Iterations per second: 9.855269155760238
FP32 Memory: 2497M
FP16 Memory: 1611M
```

FP16 clearly reduces memory usage, but speed does not improve; it even drops slightly.
Now look at the same comparison on a V100:

```
FP32 Iterations per second: 16.325794715481173
FP16 Iterations per second: 24.853492643300903
FP32 Memory: 3202M
FP16 Memory: 2272M
```

Here memory usage drops significantly and the speedup is also clear. The difference comes down to the hardware: consumer Pascal cards such as the 1080 Ti execute FP16 arithmetic at a small fraction of their FP32 rate, while the V100 (Volta) supports FP16 natively via Tensor Cores. For discussion of why FP16 sometimes brings no speedup in PyTorch, see https://discuss.pytorch.org/t/cnn-fp16-slower-than-fp32-on-tesla-p100/12146
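As a quick way to tell which regime a given GPU falls into, one can query its compute capability from PyTorch. This check (an illustration added here, not from the original post) treats compute capability 7.0 or higher, i.e. Volta and newer with Tensor Cores, as having fast FP16:

```python
import torch

major, minor = torch.cuda.get_device_capability(0)
name = torch.cuda.get_device_name(0)
# Volta (7.0) and newer have Tensor Cores; consumer Pascal parts
# such as the 1080 Ti (6.1) run FP16 at a tiny fraction of FP32 rate.
has_fast_fp16 = (major, minor) >= (7, 0)
print("%s: compute capability %d.%d, fast FP16: %s" % (name, major, minor, has_fast_fp16))
```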

### 2. Does FP16 speed up libtorch?

We test on a V100 whether FP16 can speed up libtorch inference.

#### 2.1 Download libtorch

```
wget https://download.pytorch.org/libtorch/cu101/libtorch-cxx11-abi-shared-with-deps-1.6.0%2Bcu101.zip
unzip libtorch-cxx11-abi-shared-with-deps-1.6.0+cu101.zip
```

Find the matching build of libtorch on the PyTorch website. libtorch is generally backward compatible; here libtorch is 1.6.0 while the installed PyTorch is 1.1.0.
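To double-check the pairing, the Python side can report the PyTorch and CUDA versions it was built with:

```python
import torch

print(torch.__version__)          # e.g. 1.1.0
print(torch.version.cuda)         # CUDA toolkit version PyTorch was built against
print(torch.cuda.is_available())  # confirms the GPU is visible
```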

#### 2.2 Generate trace.pt from PyTorch

```python
import torch
import torchvision.models as models

net = models.resnet18().cuda()
net.eval()  # inference mode: fixed batch-norm statistics, no dropout
inp = torch.randn(64, 3, 224, 224).cuda()
traced_script_module = torch.jit.trace(net, inp)
traced_script_module.save("RESNET18_trace.pt")
print("trace has been saved!")
```

#### 2.3 Load and run the trace from libtorch

```cpp
#include <iostream>
#include <string>
#include <chrono>
#include <vector>
#include <torch/script.h>
#include <cuda_runtime_api.h>
using namespace std;

int main()
{
    at::globalContext().setBenchmarkCuDNN(true);

    std::string model_file = "/home/zwzhou/Code/test_libtorch/RESNET18_trace.pt";
    torch::Tensor inputs = torch::rand({64, 3, 224, 224}).to(at::kCUDA);
    torch::jit::script::Module net = torch::jit::load(model_file); // load the traced model
    net.to(at::kCUDA);

    // Warm-up forward pass so cuDNN autotuning is not counted in the timing.
    auto outputs = net.forward({inputs});
    cudaDeviceSynchronize();

    auto before = std::chrono::system_clock::now();
    for (int i = 0; i < 100; ++i)
    {
        outputs = net.forward({inputs});
    }
    cudaDeviceSynchronize();
    auto after = std::chrono::system_clock::now();
    std::chrono::duration<double> all_time = after - before;
    std::cout << "FP32 iteration per second: " << (100 / all_time.count()) << "\n";

    // Convert the model to FP16; note the input is converted on every call
    // below, so the timed loop includes the cast overhead.
    net.to(torch::kHalf);
    outputs = net.forward({inputs.to(torch::kHalf)}); // FP16 warm-up, for a fair comparison
    cudaDeviceSynchronize();
    before = std::chrono::system_clock::now();
    for (int i = 0; i < 100; ++i)
    {
        outputs = net.forward({inputs.to(torch::kHalf)});
    }
    cudaDeviceSynchronize();
    after = std::chrono::system_clock::now();
    std::chrono::duration<double> all_time2 = after - before;
    std::cout << "FP16 iteration per second: " << (100 / all_time2.count()) << "\n";

    return 0;
}
```

#### 2.4 Write the CMakeLists.txt

```cmake
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(FP_TEST)

# Tell find_package where the libtorch unpacked in step 2.1 lives.
set(CMAKE_PREFIX_PATH "/home/zwzhou/packages/libtorch/share/cmake/Torch")

find_package(Torch REQUIRED)
add_executable(mtest ./libtorch_test.cpp)
target_link_libraries(mtest ${TORCH_LIBRARIES})
set_property(TARGET mtest PROPERTY CXX_STANDARD 14)
```

#### 2.5 Build and run

```
mkdir -p build && cd build
cmake ..
make
./mtest
```

#### 2.6 Output

```
FP32 iteration per second: 60.6978
FP16 iteration per second: 91.5507
```

The libtorch numbers are noticeably higher than the PyTorch ones above, though the comparison is loose: the Python benchmark timed forward plus backward passes in training mode, while the libtorch benchmark times forward only. The main point stands: on a V100, FP16 also clearly speeds up libtorch inference.

#### 2.7 Caveats
CPU tensors do not implement many FP16 ops (on these PyTorch versions), so after running FP16 inference on CUDA, convert the output back to FP32 before operating on it on the CPU. See
https://discuss.pytorch.org/t/runtimeerror-add-cpu-sub-cpu-not-implemented-for-half-when-using-float16-half/66229
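A minimal sketch of the failure mode and the fix on the Python side (the exact error is version-dependent; recent PyTorch releases implement more half ops on CPU):

```python
import torch

out = torch.randn(4, device="cuda").half()  # stand-in for an FP16 inference result

cpu_half = out.cpu()   # still float16 after moving to CPU
# cpu_half + 1.0       # older versions: RuntimeError: "add_cpu" not implemented for 'Half'

cpu_fp32 = out.float().cpu()   # cast to FP32 on the GPU first, then move to CPU
print((cpu_fp32 + 1.0).dtype)  # torch.float32
```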
