  • Docker, Part 3: Swarm

    • Make sure you have published the friendlyhello image you created by pushing it to a registry. We’ll be using that shared image here.

    • Be sure your image works as a deployed container. Run this command, slotting in your info for username, repo, and tag: docker run -p 80:80 username/repo:tag, then visit http://localhost/ (a quick check is sketched after this list).

    • Have a copy of your docker-compose.yml from Part 3 handy.
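
    As that quick check, the following sketch assumes the image you published earlier is tagged username/repo:tag (substitute your own values):

    docker run -d -p 80:80 username/repo:tag   # run your published image locally, detached so you can curl it
    curl http://localhost/                     # should return the friendlyhello page
    docker stop <container ID>                 # stop it again, using the ID shown by docker ps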

    Here, you deploy this application onto a cluster, running it on multiple machines. Multi-container, multi-machine applications are made possible by joining multiple machines into a “Dockerized” cluster called a swarm.

    Understanding Swarm clusters

    A swarm is a group of machines that are running Docker and joined into a cluster. After that has happened, you continue to run the Docker commands you’re used to, but now they are executed on a cluster by a swarm manager. The machines in a swarm can be physical or virtual. After joining a swarm, they are referred to as nodes.

    Swarm managers can use several strategies to run containers, such as “emptiest node” (which fills the least-utilized machines with containers) or “global” (which ensures that each machine gets exactly one instance of the specified container). You instruct the swarm manager to use these strategies in the Compose file, just like the one you have already been using.
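
    On the command line, the same two placement ideas look roughly like this (a sketch only: the service names are made up, username/repo:tag stands for your published image, and these commands must be run on a swarm manager):

    docker service create --name sketch-spread --replicas 3 -p 80:80 username/repo:tag   # replicated mode: tasks are spread across the least-utilized nodes
    docker service create --name sketch-global --mode global -p 81:80 username/repo:tag  # global mode: exactly one task on every node in the swarm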

    Swarm managers are the only machines in a swarm that can execute your commands, or authorize other machines to join the swarm as workers.

    Workers are just there to provide capacity and do not have the authority to tell any other machine what it can and cannot do.

    Up until now, you have been using Docker in a single-host mode on your local machine.

    But Docker also can be switched into swarm mode, and that’s what enables the use of swarms.

    Enabling swarm mode instantly makes the current machine a swarm manager. From then on, Docker will run the commands you execute on the swarm you’re managing, rather than just on the current machine.

    Set up your swarm

    A swarm is made up of multiple nodes, which can be either physical or virtual machines.

    The basic concept is simple enough: run docker swarm init to enable swarm mode and make your current machine a swarm manager, then run docker swarm join on other machines to have them join the swarm as workers.
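
    As a bare sketch (the <token> and manager IP come from the output of docker swarm init, shown further down):

    docker swarm init                                     # on the machine that becomes the swarm manager
    docker swarm join --token <token> <manager ip>:2377   # on each machine joining the swarm as a worker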

    The original guide shows how this plays out in several contexts; here we’ll use VMs to quickly create a two-machine cluster and turn it into a swarm.

    Create a cluster

      VMS ON YOUR LOCAL MACHINE (MAC, LINUX, WINDOWS 7 AND 8)

        First, you’ll need a hypervisor that can create virtual machines (VMs), so install Oracle VirtualBox for your machine’s OS.

           If you are on a Windows system that has Hyper-V installed, such as Windows 10, there is no need to install VirtualBox and you should use Hyper-V instead; see the Hyper-V instructions in the official Docker guide.

          If you are using Docker Toolbox, you should already have VirtualBox installed as part of it, so you are good to go.

        Now, create a couple of VMs with docker-machine, using the VirtualBox driver:

    docker-machine create --driver virtualbox myvm1
    docker-machine create --driver virtualbox myvm2
    

      LIST THE VMS AND GET THEIR IP ADDRESSES

        You now have two VMs created, named myvm1 and myvm2. Use this command to list the machines and get their IP addresses.

    $ docker-machine ls
    NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
    myvm1   -        virtualbox   Running   tcp://192.168.99.100:2376           v17.06.2-ce   
    myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v17.06.2-ce   
    

      INITIALIZE THE SWARM AND ADD NODES

        The first machine will act as the manager, which executes management commands and authenticates workers to join the swarm, and the second will be a worker.

        You can send commands to your VMs using docker-machine ssh. Instruct myvm1 to become a swarm manager with docker swarm init and you’ll see output like this:

    $ docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
    Swarm initialized: current node <node ID> is now a manager.
    
    To add a worker to this swarm, run the following command:
    
      docker swarm join \
      --token <token> \
      <myvm ip>:<port>
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    

        As you can see, the response to docker swarm init contains a pre-configured docker swarm join command for you to run on any nodes you want to add. Copy this command, and send it to myvm2 via docker-machine ssh to have myvm2 join your new swarm as a worker:

    $ docker-machine ssh myvm2 "docker swarm join \
    --token <token> \
    <ip>:2377"
    
    This node joined a swarm as a worker.
    

        Run docker node ls on the manager to view the nodes in this swarm:

    $ docker-machine ssh myvm1 "docker node ls"
    ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
    brtu9urxwfd5j0zrmkubhpkbd     myvm2               Ready               Active
    rihwohkh3ph38fhillhhb84sk *   myvm1               Ready               Active              Leader
    

      

      Ports 2377 and 2376

        Always run docker swarm init and docker swarm join with port 2377 (the swarm management port), or no port at all and let it take the default.

        The machine IP addresses returned by docker-machine ls include port 2376, which is the Docker daemon port. Do not use this port or you may experience errors.
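
        For example, using the manager IP from the docker-machine ls output above (a sketch; the port can also simply be omitted to use the default):

    docker swarm init --advertise-addr 192.168.99.100:2377   # 2377 is the swarm management port
    docker swarm join --token <token> 192.168.99.100:2377    # join via 2377 as well, never 2376 (the daemon port)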

      Having trouble using SSH? Try the --native-ssh flag

        Docker Machine has the option to let you use your own system’s SSH, if for some reason you’re having trouble sending commands to your Swarm manager. Just specify the --native-ssh flag when invoking the ssh command:

    docker-machine --native-ssh ssh myvm1 ...
    

      Leaving a swarm

        If you want to start over, you can run docker swarm leave from each node.
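
        For this two-VM setup, that would look like the following (the same commands appear again in the cleanup section at the end):

    docker-machine ssh myvm2 "docker swarm leave"           # the worker leaves the swarm
    docker-machine ssh myvm1 "docker swarm leave --force"   # the manager leaves, dissolving the swarm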

    Deploy your app on the swarm cluster

    To deploy your app on the swarm cluster, just repeat the process you used in part 3 to deploy on your new swarm. Just remember that only swarm managers like myvm1 execute Docker commands; workers are just for capacity.

    Configure a docker-machine shell to the swarm manager

    So far, you’ve been wrapping Docker commands in docker-machine ssh to talk to the VMs.

    Another option is to run docker-machine env <machine> to get and run a command that configures your current shell to talk to the Docker daemon on the VM. 

    This method works better for the next step because it allows you to use your local docker-compose.yml file to deploy the app “remotely” without having to copy it anywhere.

    Type docker-machine env myvm1, then copy-paste and run the command provided as the last line of the output to configure your shell to talk to myvm1, the swarm manager.

    The commands to configure your shell differ depending on whether you are on Mac, Linux, or Windows.

    DOCKER MACHINE SHELL ENVIRONMENT ON MAC OR LINUX

    Run docker-machine env myvm1 to get the command to configure your shell to talk to myvm1.

    $ docker-machine env myvm1
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    export DOCKER_CERT_PATH="/Users/sam/.docker/machine/machines/myvm1"
    export DOCKER_MACHINE_NAME="myvm1"
    # Run this command to configure your shell:
    # eval $(docker-machine env myvm1)
    

    Run the given command to configure your shell to talk to myvm1.

    eval $(docker-machine env myvm1)
    

    Run docker-machine ls to verify that myvm1 is now the active machine, as indicated by the asterisk next to it.

    $ docker-machine ls
    NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
    myvm1   *        virtualbox   Running   tcp://192.168.99.100:2376           v17.06.2-ce   
    myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v17.06.2-ce   
    

    DOCKER MACHINE SHELL ENVIRONMENT ON WINDOWS

    Run docker-machine env myvm1 to get the command to configure your shell to talk to myvm1.

    PS C:\Users\sam\sandbox\get-started> docker-machine env myvm1
    $Env:DOCKER_TLS_VERIFY = "1"
    $Env:DOCKER_HOST = "tcp://192.168.203.207:2376"
    $Env:DOCKER_CERT_PATH = "C:\Users\sam\.docker\machine\machines\myvm1"
    $Env:DOCKER_MACHINE_NAME = "myvm1"
    $Env:COMPOSE_CONVERT_WINDOWS_PATHS = "true"
    # Run this command to configure your shell:
    # & "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression
    

    Run the given command to configure your shell to talk to myvm1.

    & "C:Program FilesDockerDockerResourcesindocker-machine.exe" env myvm1 | Invoke-Expression
    

    Run docker-machine ls to verify that myvm1 is the active machine as indicated by the asterisk next to it.

    PS C:\PATH> docker-machine ls
    NAME    ACTIVE   DRIVER   STATE     URL                          SWARM   DOCKER        ERRORS
    myvm1   *        hyperv   Running   tcp://192.168.203.207:2376           v17.06.2-ce
    myvm2   -        hyperv   Running   tcp://192.168.200.181:2376           v17.06.2-ce
    

    Deploy the app on the swarm manager

    Now that you have myvm1, you can use its powers as a swarm manager to deploy your app by sending the same docker stack deploy command you used in part 3 to myvm1, along with your local copy of docker-compose.yml.

    You are connected to myvm1 by means of the docker-machine shell configuration, and you still have access to the files on your local host. Make sure you are in the same directory as before, which includes the docker-compose.yml file you created in part 3.

    Run docker stack deploy, and the app is deployed on the swarm cluster:

    docker stack deploy -c docker-compose.yml getstartedlab
    

    Now you can use the same docker commands you used in part 3. Only this time you’ll see that the services (and associated containers) have been distributed between both myvm1 and myvm2.

    $ docker stack ps getstartedlab
    
    ID            NAME                  IMAGE                   NODE   DESIRED STATE
    jq2g3qp8nzwx  getstartedlab_web.1   john/get-started:part2  myvm1  Running
    88wgshobzoxl  getstartedlab_web.2   john/get-started:part2  myvm2  Running
    vbb1qbkb0o2z  getstartedlab_web.3   john/get-started:part2  myvm2  Running
    ghii74p9budx  getstartedlab_web.4   john/get-started:part2  myvm1  Running
    0prmarhavs87  getstartedlab_web.5   john/get-started:part2  myvm2  Running
    

    Connecting to VMs with docker-machine env and docker-machine ssh

    • To set your shell to talk to a different machine like myvm2, simply re-run docker-machine env in the same or a different shell, then run the given command to point to myvm2. This is always specific to the current shell: if you change to an unconfigured shell or open a new one, you need to re-run the commands (a short sketch follows this list). Use docker-machine ls to list machines, see what state they are in, get IP addresses, and find out which one, if any, you are connected to. To learn more, see the Docker Machine getting started topics.

    • Alternatively, you can wrap Docker commands in the form of docker-machine ssh <machine> "<command>", which logs directly into the VM but doesn’t give you immediate access to files on your local host.

    • On Mac and Linux, you can use docker-machine scp <file> <machine>:~ to copy files across machines, but Windows users need a Linux terminal emulator like Git Bash in order for this to work.

    • This tutorial demos both docker-machine ssh and docker-machine env, since these are available on all platforms via the docker-machine CLI.
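
    As a short sketch of the first bullet (Mac/Linux shell syntax; the Windows equivalent uses the Invoke-Expression form shown earlier):

    eval $(docker-machine env myvm2)   # point the current shell at myvm2 instead
    docker-machine ls                  # the asterisk should now be next to myvm2
    eval $(docker-machine env -u)      # disconnect and go back to the local Docker daemon
    eval $(docker-machine env myvm1)   # re-point at the swarm manager before deploying again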

    Accessing your cluster

    You can access your app from the IP address of either myvm1 or myvm2.

    The network you created is shared between them and load-balanced. Run docker-machine ls to get your VMs’ IP addresses and visit either of them in a browser, hitting refresh (or just curl them).

    You’ll see five possible container IDs all cycling by randomly, demonstrating the load-balancing.
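
    For example, using the IP addresses from the docker-machine ls output above (your IPs may differ; the container ID in each response varies as requests are balanced):

    curl http://192.168.99.100/   # myvm1 answers with the friendlyhello page
    curl http://192.168.99.101/   # myvm2 answers as well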

    The reason both IP addresses work is that nodes in a swarm participate in an ingress routing mesh. This ensures that a service deployed at a certain port within your swarm always has that port reserved to itself, no matter what node is actually running the container. (The official guide illustrates this with a diagram of the routing mesh for a service called my-web published at port 8080 on a three-node swarm.)

    Having connectivity trouble?

    Keep in mind that in order to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:

    • Port 7946 TCP/UDP for container network discovery.
    • Port 4789 UDP for the container ingress network.
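
    How you open these ports depends on your environment. Purely as an illustration (not something this VirtualBox setup requires), on a Linux host using ufw it might look like:

    sudo ufw allow 2377/tcp   # swarm management (docker swarm init / join)
    sudo ufw allow 7946/tcp   # container network discovery
    sudo ufw allow 7946/udp   # container network discovery
    sudo ufw allow 4789/udp   # container ingress network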

    Iterating and scaling your app

    Scale the app by changing the docker-compose.yml file.

    Change the app behavior by editing code, then rebuild, and push the new image. (To do this, follow the same steps you took earlier to build the app and publish the image).

    In either case, simply run docker stack deploy again to deploy these changes.
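
    As a sketch of the second path (username/repo:tag stands for the image name you used earlier, and the commands are run from the project directory containing docker-compose.yml):

    docker build -t username/repo:tag .                        # rebuild after editing the code
    docker push username/repo:tag                              # publish the updated image to the registry
    docker stack deploy -c docker-compose.yml getstartedlab    # redeploy so the swarm picks up the change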

     You can join any machine, physical or virtual, to this swarm, using the same docker swarm join command you used on myvm2, and capacity will be added to your cluster.

    Just run docker stack deploy afterwards, and your app will take advantage of the new resources.

    Cleanup and reboot

    Stacks and swarms

    You can tear down the stack with docker stack rm:

    docker stack rm getstartedlab
    

    Keep the swarm or remove it?

    At some point later, you can remove this swarm if you want to with docker-machine ssh myvm2 "docker swarm leave" on the worker and docker-machine ssh myvm1 "docker swarm leave --force" on the manager, but you’ll need this swarm for part 5, so please keep it around for now.

    Unsetting docker-machine shell variable settings

    You can unset the docker-machine environment variables in your current shell with the following command:

    eval $(docker-machine env -u)
    

    This disconnects the shell from docker-machine created virtual machines, and allows you to continue working in the same shell, now using native docker commands (for example, on Docker for Mac or Docker for Windows). To learn more, see the Machine topic on unsetting environment variables. 

    Restarting Docker machines

    If you shut down your local host, Docker machines will stop running. You can check the status of machines by running docker-machine ls.

    $ docker-machine ls
    NAME    ACTIVE   DRIVER       STATE     URL   SWARM   DOCKER    ERRORS
    myvm1   -        virtualbox   Stopped                 Unknown
    myvm2   -        virtualbox   Stopped                 Unknown
    

    To restart a machine that’s stopped, run:

    docker-machine start <machine-name>
    
    
    $ docker-machine start myvm1
    Starting "myvm1"...
    (myvm1) Check network to re-create if needed...
    (myvm1) Waiting for an IP...
    Machine "myvm1" was started.
    Waiting for SSH to be available...
    Detecting the provisioner...
    Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
    
    $ docker-machine start myvm2
    Starting "myvm2"...
    (myvm2) Check network to re-create if needed...
    (myvm2) Waiting for an IP...
    Machine "myvm2" was started.
    Waiting for SSH to be available...
    Detecting the provisioner...
    Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
    
    docker-machine create --driver virtualbox myvm1 # Create a VM (Mac, Win7, Linux)
    docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1 # Win10
    docker-machine env myvm1                # View basic information about your node
    docker-machine ssh myvm1 "docker node ls"         # List the nodes in your swarm
    docker-machine ssh myvm1 "docker node inspect <node ID>"        # Inspect a node
    docker-machine ssh myvm1 "docker swarm join-token -q worker"   # View join token
    docker-machine ssh myvm1   # Open an SSH session with the VM; type "exit" to end
    docker node ls                # View nodes in swarm (while logged on to manager)
    docker-machine ssh myvm2 "docker swarm leave"  # Make the worker leave the swarm
    docker-machine ssh myvm1 "docker swarm leave -f" # Make master leave, kill swarm
    docker-machine ls # list VMs, asterisk shows which VM this shell is talking to
    docker-machine start myvm1            # Start a VM that is currently not running
    docker-machine env myvm1      # show environment variables and command for myvm1
    eval $(docker-machine env myvm1)         # Mac command to connect shell to myvm1
    & "C:Program FilesDockerDockerResourcesindocker-machine.exe" env myvm1 | Invoke-Expression   # Windows command to connect shell to myvm1
    docker stack deploy -c <file> <app>  # Deploy an app; command shell must be set to talk to manager (myvm1), uses local Compose file
    docker-machine scp docker-compose.yml myvm1:~ # Copy file to node's home dir (only required if you use ssh to connect to manager and deploy the app)
    docker-machine ssh myvm1 "docker stack deploy -c <file> <app>"   # Deploy an app using ssh (you must have first copied the Compose file to myvm1)
    eval $(docker-machine env -u)     # Disconnect shell from VMs, use native docker
    docker-machine stop $(docker-machine ls -q)               # Stop all running VMs
    docker-machine rm $(docker-machine ls -q) # Delete all VMs and their disk images
    

      

  • Original post: https://www.cnblogs.com/panpanwelcome/p/8093284.html