  • Neural Style Notes 2: Environment Setup

    neural-style Installation

    This guide will walk you through the setup for neural-style on Ubuntu.

    Step 1: Install torch7

    First we need to install torch, following the installation instructions
    here:

    # in a terminal, run the commands
    cd ~/
    curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
    git clone https://github.com/torch/distro.git ~/torch --recursive
    cd ~/torch; ./install.sh
    

    The first script installs all dependencies for torch and may take a while.
    The second script actually installs lua and torch.
    The second script also edits your .bashrc file so that torch is added to your PATH variable;
    we need to source it to refresh our environment variables:

    source ~/.bashrc
    

    To check that your torch installation is working, run the command th to enter the interactive shell.
    To quit just type exit.
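
    For example, a minimal sanity check from the command line, which builds a small tensor and prints its sum (it should print 6):

    th -e "print(torch.Tensor{1,2,3}:sum())"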

    Step 2: Install loadcaffe

    loadcaffe depends on Google's Protocol Buffer library
    so we'll need to install that first:

    sudo apt-get install libprotobuf-dev protobuf-compiler
    

    Now we can install loadcaffe:

    luarocks install loadcaffe
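
    If the rock installed correctly, requiring it from Torch should succeed without errors. A quick check (the printed message is just a placeholder):

    th -e "require 'loadcaffe'; print('loadcaffe OK')"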
    

    Step 3: Install neural-style

    First we clone neural-style from GitHub:

    cd ~/
    git clone https://github.com/jcjohnson/neural-style.git
    cd neural-style
    

    Next we need to download the pretrained neural network models:

    sh models/download_models.sh
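
    The script places the pretrained VGG-19 files under the models/ directory; you can confirm the download completed with ls (the exact file list may vary with the script version):

    ls -lh models/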
    

    You should now be able to run neural-style in CPU mode like this:

    th neural_style.lua -gpu -1 -print_iter 1
    

    If everything is working properly you should see output like this:

    [libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
    [libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
    Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
    conv1_1: 64 3 3 3
    conv1_2: 64 64 3 3
    conv2_1: 128 64 3 3
    conv2_2: 128 128 3 3
    conv3_1: 256 128 3 3
    conv3_2: 256 256 3 3
    conv3_3: 256 256 3 3
    conv3_4: 256 256 3 3
    conv4_1: 512 256 3 3
    conv4_2: 512 512 3 3
    conv4_3: 512 512 3 3
    conv4_4: 512 512 3 3
    conv5_1: 512 512 3 3
    conv5_2: 512 512 3 3
    conv5_3: 512 512 3 3
    conv5_4: 512 512 3 3
    fc6: 1 1 25088 4096
    fc7: 1 1 4096 4096
    fc8: 1 1 4096 1000
    WARNING: Skipping content loss	
    Iteration 1 / 1000	
      Content 1 loss: 2091178.593750	
      Style 1 loss: 30021.292114	
      Style 2 loss: 700349.560547	
      Style 3 loss: 153033.203125	
      Style 4 loss: 12404635.156250	
      Style 5 loss: 656.860304	
      Total loss: 15379874.666090	
    Iteration 2 / 1000	
      Content 1 loss: 2091177.343750	
      Style 1 loss: 30021.292114	
      Style 2 loss: 700349.560547	
      Style 3 loss: 153033.203125	
      Style 4 loss: 12404633.593750	
      Style 5 loss: 656.860304	
      Total loss: 15379871.853590	
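
    To stylize your own images rather than the defaults, pass them explicitly; the paths below are placeholders, and -image_size is lowered to keep the CPU run reasonably fast:

    th neural_style.lua -gpu -1 \
      -content_image path/to/content.jpg \
      -style_image path/to/style.jpg \
      -output_image out.png \
      -image_size 256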
    

    (Optional) Step 4: Install CUDA

    If you have a CUDA-capable GPU from NVIDIA then you can
    speed up neural-style with CUDA.

    First download and unpack the local CUDA installer from NVIDIA; note that there are different
    installers for each recent version of Ubuntu:

    # For Ubuntu 14.10
    wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/rpmdeb/cuda-repo-ubuntu1410-7-0-local_7.0-28_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1410-7-0-local_7.0-28_amd64.deb
    
    # For Ubuntu 14.04
    wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/rpmdeb/cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb
    
    # For Ubuntu 12.04
    wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/rpmdeb/cuda-repo-ubuntu1204-7-0-local_7.0-28_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1204-7-0-local_7.0-28_amd64.deb
    

    Now update the repository cache and install CUDA. Note that this will also install a graphics driver from NVIDIA.

    sudo apt-get update
    sudo apt-get install cuda
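
    The toolkit installs under /usr/local/cuda-7.0 by default; depending on your shell configuration you may also need to put it on your PATH before nvcc and the CUDA libraries are visible (a typical addition to ~/.bashrc, assuming the default install location):

    export PATH=/usr/local/cuda-7.0/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
    nvcc --version   # should report the 7.0 toolkit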
    

    At this point you may need to reboot your machine to load the new graphics driver.
    After rebooting, you should be able to see the status of your graphics card(s) by running
    the command nvidia-smi; it should give output that looks something like this:

    Sun Sep  6 14:02:59 2015       
    +------------------------------------------------------+                       
    | NVIDIA-SMI 346.96     Driver Version: 346.96         |                       
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX TIT...  Off  | 0000:01:00.0      On |                  N/A |
    | 22%   49C    P8    18W / 250W |   1091MiB / 12287MiB |      3%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GeForce GTX TIT...  Off  | 0000:04:00.0     Off |                  N/A |
    | 29%   44C    P8    27W / 189W |     15MiB /  6143MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  GeForce GTX TIT...  Off  | 0000:05:00.0     Off |                  N/A |
    | 30%   45C    P8    33W / 189W |     15MiB /  6143MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |    0      1277    G   /usr/bin/X                                     631MiB |
    |    0      2290    G   compiz                                         256MiB |
    |    0      2489    G   ...s-passed-by-fd --v8-snapshot-passed-by-fd   174MiB |
    +-----------------------------------------------------------------------------+
    

    (Optional) Step 5: Install CUDA backend for torch

    This is easy:

    luarocks install cutorch
    luarocks install cunn
    

    You can check that the installation worked by running the following:

    th -e "require 'cutorch'; require 'cunn'; print(cutorch)"
    

    This should produce output like this:

    {
      getStream : function: 0x40d40ce8
      getDeviceCount : function: 0x40d413d8
      setHeapTracking : function: 0x40d41a78
      setRNGState : function: 0x40d41a00
      getBlasHandle : function: 0x40d40ae0
      reserveBlasHandles : function: 0x40d40980
      setDefaultStream : function: 0x40d40f08
      getMemoryUsage : function: 0x40d41480
      getNumStreams : function: 0x40d40c48
      manualSeed : function: 0x40d41960
      synchronize : function: 0x40d40ee0
      reserveStreams : function: 0x40d40bf8
      getDevice : function: 0x40d415b8
      seed : function: 0x40d414d0
      deviceReset : function: 0x40d41608
      streamWaitFor : function: 0x40d40a00
      withDevice : function: 0x40d41630
      initialSeed : function: 0x40d41938
      CudaHostAllocator : torch.Allocator
      test : function: 0x40ce5368
      getState : function: 0x40d41a50
      streamBarrier : function: 0x40d40b58
      setStream : function: 0x40d40c98
      streamBarrierMultiDevice : function: 0x40d41538
      streamWaitForMultiDevice : function: 0x40d40b08
      createCudaHostTensor : function: 0x40d41670
      setBlasHandle : function: 0x40d40a90
      streamSynchronize : function: 0x40d41590
      seedAll : function: 0x40d414f8
      setDevice : function: 0x40d414a8
      getNumBlasHandles : function: 0x40d409d8
      getDeviceProperties : function: 0x40d41430
      getRNGState : function: 0x40d419d8
      manualSeedAll : function: 0x40d419b0
      _state : userdata: 0x022fe750
    }
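
    A slightly stronger check is to actually move a tensor onto the GPU and operate on it (this assumes at least one CUDA device is visible; it should print 6):

    th -e "require 'cutorch'; print(torch.Tensor{1,2,3}:cuda():sum())"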
    

    You should now be able to run neural-style in GPU mode:

    th neural_style.lua -gpu 0 -print_iter 1
    

    (Optional) Step 6: Install cuDNN

    cuDNN is a library from NVIDIA that efficiently implements many of the operations (like convolutions and pooling)
    that are commonly used in deep learning.

    After registering as a developer with NVIDIA, you can download cuDNN from https://developer.nvidia.com/cudnn.
    Make sure to download Version 4.

    After downloading, you can unpack and install cuDNN like this:

    tar -xzvf cudnn-7.0-linux-x64-v4.0-prod.tgz
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda-7.0/lib64/
    sudo cp cuda/include/cudnn.h /usr/local/cuda-7.0/include/
    

    Next we need to install the torch bindings for cuDNN:

    luarocks install cudnn
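
    Before running the full script, you can quickly check that both the binding and the libcudnn shared library can be found (the printed message is just a placeholder):

    th -e "require 'cudnn'; print('cudnn OK')"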
    

    You should now be able to run neural-style with cuDNN like this:

    th neural_style.lua -gpu 0 -backend cudnn
    

    Note that the cuDNN backend can only be used for GPU mode.

    Note: you are likely to run into the following pitfalls during installation.
    Pitfall 1: torch has a lot of dependencies

    curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
    Running this script takes quite a while. My network connection here is poor, so if yours is similar you may see errors like "xxx checksum mismatch"; in that case the dependencies were not installed at all. I originally assumed they were already in place, finished downloading and ran ./install.sh right away, only to hit errors like "cmake not found". I then naively ran sudo apt-get install cmake, which just led to more problems. The bottom line: curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash installs all of the dependencies, and when it finishes successfully it prints something like "torch dependencies have already installed."

    Pitfall 2: CUDA version mismatch

    wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/rpmdeb/cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb

    sudo dpkg -i cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb
    In theory this looks fine, but then you run:

    sudo apt-get update
    sudo apt-get install cuda
    The cuda package that apt-get installs here is CUDA 7.5, which does not match the 7.0 repo package added above. The right approach is to download the matching installer from NVIDIA's site yourself and install that.
    For my Ubuntu version I therefore chose
    cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb

    If apt-get ever serves an even newer CUDA version, be sure to download and install the matching local repo package in the same way.
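
    Concretely, for Ubuntu 14.04 the corrected sequence looks like this (download the .deb for CUDA 7.5 from NVIDIA's CUDA downloads page first):

    sudo dpkg -i cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda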

    Pitfall 3: the cuDNN and CUDA versions must match

    The original instructions were:

    tar -xzvf cudnn-7.0-linux-x64-v4.0-prod.tgz
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda-7.0/lib64/
    sudo cp cuda/include/cudnn.h /usr/local/cuda-7.0/include/
    luarocks install cudnn
    Here you should instead download the build for CUDA 7.5, which in this case is cudnn-7.5-linux-x64-v5.0-ga.tgz.
    And of course, in

    sudo cp cuda/include/cudnn.h /usr/local/cuda-7.0/include/
    change the 7.0 to 7.5.
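
    With those substitutions, the adjusted commands become:

    tar -xzvf cudnn-7.5-linux-x64-v5.0-ga.tgz
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda-7.5/lib64/
    sudo cp cuda/include/cudnn.h /usr/local/cuda-7.5/include/
    luarocks install cudnn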

    Pitfall 4: you may hit 'libcudnn not found in library path'

    An excerpt from the error message:

    Please install CuDNN from https://developer.nvidia.com/cuDNN
    Then make sure files named as libcudnn.so.5 or libcudnn.5.dylib are placed in your library load path (for example /usr/local/lib , or manually add a path to LD_LIBRARY_PATH)
    LD_LIBRARY_PATH is the environment variable that specifies additional directories, besides the defaults, to search when loading shared libraries. Since we just copied
    the libcudnn* files into /usr/local/cuda-7.5/lib64/, we need to tell the dynamic linker about that directory:

    sudo gedit /etc/ld.so.conf.d/cudnn.conf   (this simply creates a new .conf file; the name is arbitrary)
    Add the path /usr/local/cuda-7.5/lib64/ to it.
    I also added /usr/local/cuda-7.5/include/, though that is probably unnecessary.
    Save the file, then run sudo ldconfig to refresh the cache. (You may see a complaint that libcudnn.so.5 is not a symbolic link; it does not matter.)
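
    If you prefer a non-interactive equivalent of the gedit step above, the same can be done directly from the shell:

    sudo sh -c 'echo "/usr/local/cuda-7.5/lib64" > /etc/ld.so.conf.d/cudnn.conf'
    sudo ldconfig
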
    Now run:

    th neural_style.lua -gpu 0 -backend cudnn
    and this time it works!

    With cuDNN, the progress output now appears 50 iterations at a time and it runs a bit faster; with plain CUDA just before, every single iteration was printed. That's it for now, time for a break.

    Summary

    Make sure the versions match. In the future apt-get will offer even newer versions of CUDA and cuDNN; when that happens, download and install the corresponding versions for your setup in the same way.

  • Original article: https://www.cnblogs.com/DarrenChan/p/6211528.html