  • NVIDIA-docker Cheatsheet

    TensorFlow Docker requirements

    1. Install Docker on your local host machine.
    2. For GPU support on Linux, install nvidia-docker.

    Note: To run the docker command without sudo, create the docker group and add your user. For details, see the post-installation steps for Linux.
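
    A minimal sketch of those post-installation steps (taken from Docker's documentation; log out and back in, or run newgrp, for the group change to take effect):

    sudo groupadd docker            # create the docker group if it does not already exist
    sudo usermod -aG docker $USER   # add your user to the group
    newgrp docker                   # pick up the new group membership in the current shell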

    Download a TensorFlow Docker image

    The official TensorFlow Docker images are located in the tensorflow/tensorflow Docker Hub repository. Image releases are tagged using the following format:

    Tag       Description
    latest    The latest release of TensorFlow CPU binary image. Default.
    nightly   Nightly builds of the TensorFlow image. (unstable)
    version   Specify the version of the TensorFlow binary image, for example: 1.14.0
    devel     Nightly builds of a TensorFlow master development environment. Includes TensorFlow source code.

    Each base tag has variants that add or change functionality:

    Tag Variant   Description
    tag-gpu       The specified tag release with GPU support. (See below)
    tag-py3       The specified tag release with Python 3 support.
    tag-jupyter   The specified tag release with Jupyter (includes TensorFlow tutorial notebooks)

    You can use multiple variants at once. For example, the following downloads TensorFlow release images to your machine:

    docker pull tensorflow/tensorflow                     # latest stable release
    docker pull tensorflow/tensorflow:devel-gpu           # nightly dev release w/ GPU support
    docker pull tensorflow/tensorflow:latest-gpu-jupyter  # latest release w/ GPU support and Jupyter
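
    Variants can also be combined with a specific release number; for example (the exact tag below is illustrative, check Docker Hub for the tags that actually exist):

    docker pull tensorflow/tensorflow:1.14.0-gpu-py3      # a pinned release w/ GPU and Python 3 support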
     

    Start a TensorFlow Docker container

    To start a TensorFlow-configured container, use the following command form:

    docker run [-it] [--rm] [-p hostPort:containerPort] tensorflow/tensorflow[:tag] [command]
    
     

    For details, see the docker run reference.
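
    To make the flags concrete, here is an annotated example (the latest-jupyter tag is assumed from the variant table above):

    docker run -it --rm -p 8888:8888 tensorflow/tensorflow:latest-jupyter
    # -it  : attach an interactive terminal
    # --rm : remove the container when it exits
    # -p   : publish container port 8888 on host port 8888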

    Examples using CPU-only images

    Let's verify the TensorFlow installation using the latest tagged image. Docker downloads a new TensorFlow image the first time it is run:

    docker run -it --rm tensorflow/tensorflow \
       python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
     

    Success: TensorFlow is now installed. Read the tutorials to get started.

    Let's demonstrate some more TensorFlow Docker recipes. Start a bash shell session within a TensorFlow-configured container:

    docker run -it tensorflow/tensorflow bash
    
     

    Within the container, you can start a python session and import TensorFlow.
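
    For example, a quick check from inside the container (the version printed depends on the image you pulled):

    python -c "import tensorflow as tf; print(tf.__version__)"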

    To run a TensorFlow program developed on the host machine within a container, mount the host directory and change the container's working directory (-v hostDir:containerDir -w workDir):

    docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
     

    Permission issues can arise when files created within a container are exposed to the host. It's usually best to edit files on the host system.
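
    If files written from the container do need to match your host user, one common workaround (a sketch, not part of the original cheatsheet) is to pass your host UID/GID with docker's -u flag:

    docker run -it --rm -u $(id -u):$(id -g) -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py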

    Start a Jupyter Notebook server using TensorFlow's nightly build with Python 3 support:

    docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
    
     

    Follow the instructions and open the URL in your host web browser: http://127.0.0.1:8888/?token=...
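
    To keep notebooks on the host instead of inside the container, you can also mount a host directory; the /tf/notebooks path below follows the convention used by the TensorFlow Jupyter images, so adjust it if your image differs:

    docker run -it --rm -v $PWD:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter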

    GPU support

    Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit is not required).

    Install nvidia-docker to launch a Docker container with NVIDIA® GPU support. nvidia-docker is only available for Linux; see their platform support FAQ for details.

    Check if a GPU is available:

    lspci | grep -i nvidia
    
     

    Verify your nvidia-docker installation:

    docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
    
     

    Note: nvidia-docker v1 uses the nvidia-docker alias, whereas v2 uses docker --runtime=nvidia.
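
    In other words, the same smoke test looks slightly different under the two versions:

    nvidia-docker run --rm nvidia/cuda nvidia-smi             # nvidia-docker v1
    docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi   # nvidia-docker v2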

    Examples using GPU-enabled images

    Download and run a GPU-enabled TensorFlow image (may take a few minutes):

    docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
       python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
     

    It can take a while to set up the GPU-enabled image. If you repeatedly run GPU-based scripts, you can use docker exec to reuse a container.
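
    One way to do that (a sketch; the container name tf-gpu is arbitrary) is to keep a long-running container around and exec each script into it:

    docker run --runtime=nvidia -d --name tf-gpu -v $PWD:/tmp -w /tmp tensorflow/tensorflow:latest-gpu sleep infinity
    docker exec -it tf-gpu python ./script.py   # reuse the running container for each script
    docker rm -f tf-gpu                         # remove the container when finished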

    Use the latest TensorFlow GPU image to start a bash shell session in the container:

    docker run --runtime=nvidia -it tensorflow/tensorflow:latest-gpu bash
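
    Inside that shell you can confirm that TensorFlow sees the GPU (tf.test.is_gpu_available() is the TensorFlow 1.x check; newer 2.x images use tf.config.list_physical_devices('GPU')):

    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"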
    
     
  • Original post: https://www.cnblogs.com/sddai/p/11116349.html