  • ENV-docker: quickly deploying an ES cluster and a Spark cluster

    1) Pull down the two quick-deployment environments (ES cluster, Spark cluster), run them with Docker only, and save the images to the private registry.

    2) Work out how to build (package) a Linux image.

    3) Try modifying them so they run inside the cluster.

    4) Work out

              how a Dockerfile builds an image (a minimal sketch follows this list), and

              how the startup entries in docker-compose map onto Mesos.

    5) Write a Spark program,

         and load a small amount of data into the ES environment and query it.
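    For point 4, a minimal sketch of how a Dockerfile builds an image and how it lands in the private registry used below. The image name myapp and the file start.sh are made-up placeholders, not artifacts from this environment:

    # Dockerfile (hypothetical example)
    FROM 192.168.1.153:31809/centos7
    COPY start.sh /start.sh
    RUN chmod +x /start.sh
    CMD ["/start.sh"]

    # build, tag, and push to the private registry
    docker build -t myapp:1.0 .
    docker tag myapp:1.0 192.168.1.153:31809/myapp:1.0
    docker push 192.168.1.153:31809/myapp:1.0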

    The ES environment on Docker follows this article: https://cloud.tencent.com/developer/article/1098820

    Main steps recorded below:

        1) Pull the images into the private registry.

        2) On the intranet VMs: disable other services (skip any that are not installed), adjust IPs, set up passwordless SSH, and so on.

    systemctl disable mesos-slave
    systemctl stop mesos-slave
    systemctl disable zookeeper 
    systemctl stop zookeeper
    systemctl disable mesos-master  
    systemctl stop mesos-master 
    systemctl disable marathon 
    systemctl stop marathon
    systemctl enable docker
    systemctl start docker
    systemctl daemon-reload
    systemctl restart docker
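    For the passwordless-SSH part of step 2, a minimal sketch (run on each node; 192.168.1.100 stands in for each of the other nodes in turn):

    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generate a key pair without a passphrase
    ssh-copy-id root@192.168.1.100              # repeat for every other node in the cluster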

        3) Pull the images from the private registry; a node does not need all of them, only the ones it runs.

    docker pull 192.168.1.153:31809/kafka.new.es
    docker pull 192.168.1.153:31809/zookeeper.new.es
    
    
    docker pull 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
    docker pull 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
    docker pull 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
     

        4)

           The main scripts are cp.test.sh and docker.run.sh.

           cp.test.sh tests the script flow and handles the "yes" prompt that the first passwordless login asks for.

           4.1

           Put cp.test.sh on any one node and let it send itself to all nodes; do this on every node, typing yes each time.

           Send it once more to make sure no node was missed.

          4.2

          Run docker.run.sh to start the containers.

                 docker.run.sh does three things:

                 1) sends itself to the other nodes;

                 2) starts the Docker containers on its own node;

                 3) runs docker.run.sh on the other nodes over ssh (with the right argument, those remote runs will not ssh onward again).

                In short, docker.run.sh stops and removes all existing containers on every node and then starts the containers that should run there.

    The related operational scripts are attached below; the IPs have been modified.

      -------------------common images---------------------------------------
      
    docker pull jenkins
    docker tag jenkins 192.168.1.153:31809/jenkins:latest
    docker push 192.168.1.153:31809/jenkins:latest
    docker rmi jenkins
    docker rmi 192.168.1.153:31809/jenkins
     
    docker pull mysql
    docker tag mysql 192.168.1.153:31809/mysql:latest
    docker push 192.168.1.153:31809/mysql:latest
    docker rmi mysql
    docker rmi 192.168.1.153:31809/mysql
    
    docker pull tomcat
    docker tag tomcat 192.168.1.153:31809/tomcat:latest
    docker push 192.168.1.153:31809/tomcat:latest
    docker rmi tomcat
    docker rmi 192.168.1.153:31809/tomcat
    
    docker pull maven
    docker tag maven 192.168.1.153:31809/maven:latest
    docker push 192.168.1.153:31809/maven:latest
    docker rmi maven
    docker rmi 192.168.1.153:31809/maven
    
      -------------------common images---------------------------------------
      
      
      -----------------quick-deploy-ES-cluster.txt--------------------------
        https://cloud.tencent.com/developer/article/1098820
    docker pull zookeeper 
    docker tag zookeeper 192.168.1.153:31809/zookeeper.new.es
    docker push 192.168.1.153:31809/zookeeper.new.es
    docker rmi zookeeper
    docker rmi 192.168.1.153:31809/zookeeper.new.es
     
    docker pull wurstmeister/kafka 
    docker tag wurstmeister/kafka  192.168.1.153:31809/kafka.new.es
    docker push  192.168.1.153:31809/kafka.new.es
    docker rmi wurstmeister/kafka
    docker rmi 192.168.1.153:31809/kafka.new.es
    
     
    docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.8
    docker tag docker.elastic.co/elasticsearch/elasticsearch:5.6.8 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
    docker push 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
    docker rmi docker.elastic.co/elasticsearch/elasticsearch:5.6.8 
    docker rmi 192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
    
    docker pull docker.elastic.co/kibana/kibana:5.6.8
    docker tag docker.elastic.co/kibana/kibana:5.6.8 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
    docker push 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
    docker rmi  docker.elastic.co/kibana/kibana:5.6.8
    docker rmi 192.168.1.153:31809/elastic/kibana:5.6.8.new.es
    
    docker pull docker.elastic.co/logstash/logstash:5.6.8
    docker tag docker.elastic.co/logstash/logstash:5.6.8 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
    docker push 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
    docker rmi  docker.elastic.co/logstash/logstash:5.6.8
    docker rmi 192.168.1.153:31809/elastic/logstash:5.6.8.new.es
      
      -----------------quick-deploy-ES-cluster.txt------------------------------
    
    
       
     -----------------spark:2.2 image--------------------------
     https://www.cnblogs.com/hongdada/p/9475406.html
    docker pull singularities/spark
    docker tag singularities/spark 192.168.1.153:31809/singularities/spark
    docker push 192.168.1.153:31809/singularities/spark
    docker rmi singularities/spark
    docker rmi 192.168.1.153:31809/singularities/spark
    ----------------- spark:2.2 image------------------------------

    -----------------centos7 image--------------------------
    https://www.jianshu.com/p/4801bb7ab9e0
    docker pull centos:7
    docker tag centos:7 192.168.1.153:31809/centos7
    docker push 192.168.1.153:31809/centos7
    docker rmi centos:7
    docker rmi 192.168.1.153:31809/centos7
    ----------------- centos7 image------------------------------

    -----------------singularities/hadoop 2.8 image--------------------------
    https://www.jianshu.com/p/4801bb7ab9e0
    docker pull singularities/hadoop:2.8
    docker tag singularities/hadoop:2.8 192.168.1.153:31809/singularities/hadoop.2.8
    docker push 192.168.1.153:31809/singularities/hadoop.2.8
    docker rmi singularities/hadoop:2.8
    docker rmi 192.168.1.153:31809/singularities/hadoop.2.8
    ----------------- singularities/hadoop 2.8 image ------------------------------

      
      
    -- docker images
    -- list the repositories in the registry
    curl -X GET http://192.168.1.153:31809/v2/_catalog
    -- returns: {"repositories":["nginx"]}
    -- list the tags of one repository
    curl -X GET http://192.168.1.153:31809/v2/nginx/tags/list
    -- returns: {"name":"nginx","tags":["latest"]}
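    For docker push/pull against this registry to work over plain HTTP, each node's Docker daemon normally has to trust it explicitly. A minimal sketch, assuming no existing /etc/docker/daemon.json on the node:

    cat > /etc/docker/daemon.json <<'EOF'
    {
      "insecure-registries": ["192.168.1.153:31809"]
    }
    EOF
    systemctl restart docker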

    Passwordless-login test script

    cp.test.sh

    #!/bin/bash
    flag=$1
    getip()  
    {  
       ifconfig|grep 192|awk '{print $2}'
    }  
    
    ip=`getip`
    echo "salf IP:" $ip
    cpToOtherVM()
    {
        if [[ "${flag}" == "y" ]]; then  
    
                 if [[ "${ip}" != "$1" ]]; then  
    
                    scp -r /z-hl-c53cc450-62bf-4b65-b7f2-432e2aae9c62-v5.json $1:/
    
                 fi
        fi
       
    }
    
    execOtherVmShell()
    {
        if [[ "${flag}" == "y" ]]; then  
    
            if [[ "${ip}" != "$1" ]]; then 
    
               ssh root@$1 "sh /cp.test.sh"
    
            fi
        fi
    }
    
    echo "copy to"
    cpToOtherVM "192.168.1.100"
    cpToOtherVM "192.168.1.101"
    cpToOtherVM "192.168.1.102"
    sleep 1
    cpToOtherVM "192.168.1.110"
    cpToOtherVM "192.168.1.111"
    cpToOtherVM "192.168.1.112"
    sleep 1
    cpToOtherVM "192.168.1.120"
    cpToOtherVM "192.168.1.121"
    cpToOtherVM "192.168.1.122"
    cpToOtherVM "192.168.1.123"
    sleep 3
    echo "exec other"
    execOtherVmShell "192.168.1.100"
    execOtherVmShell "192.168.1.101"
    execOtherVmShell "192.168.1.102"
    
    execOtherVmShell "192.168.1.110"
    execOtherVmShell "192.168.1.111"
    execOtherVmShell "192.168.1.112"
    
    execOtherVmShell "192.168.1.120"
    execOtherVmShell "192.168.1.121"
    execOtherVmShell "192.168.1.122"
    execOtherVmShell "192.168.1.123"
    
    echo "exec salf action"

    Container startup script

    docker.run.sh

    [root@docker-master3 /]# cat docker.run.sh 
    
    #!/bin/bash
    flag=$1
    getip()
    {
       ifconfig|grep 192|awk '{print $2}'
    }
    
    ip=`getip`
    echo "salf IP:" $ip
    cpToOtherVM()
    {
        if [[ "${flag}" == "y" ]]; then
    
                 if [[ "${ip}" != "$1" ]]; then
    
                    scp -r /etc/sysctl.conf $1:/etc/sysctl.conf
                    scp -r /docker.run.sh $1:/docker.run.sh
    
                 fi
        fi
    
    }
    
    execOtherVmShell()
    {
        if [[ "${flag}" == "y" ]]; then
    
            if [[ "${ip}" != "$1" ]]; then
    
           ssh root@$1 "docker ps -a |grep 192.168.1. |awk -F ' ' '{print \$1}'| xargs -i docker kill {}"
           echo "stop all docker"
           sleep 2
           ssh root@$1 "docker ps -a |grep 192.168.1. |awk -F ' ' '{print \$1}'| xargs -i docker rm {}"
           echo "rm all docker"
               sleep 5
               ssh root@$1 "sh /docker.run.sh"
    
            fi
        fi
    }
    
    echo "copy to"
    cpToOtherVM "192.168.1.100"
    cpToOtherVM "192.168.1.101"
    cpToOtherVM "192.168.1.102"
    sleep 1
    cpToOtherVM "192.168.1.110"
    cpToOtherVM "192.168.1.111"
    cpToOtherVM "192.168.1.112"
    sleep 1
    cpToOtherVM "192.168.1.120"
    cpToOtherVM "192.168.1.121"
    cpToOtherVM "192.168.1.122"
    cpToOtherVM "192.168.1.123"
    sleep 1
    
    echo "exec salf action" 
    docker ps -a |grep 192.168.1. |awk -F ' ' '{print $1}'| xargs -i docker kill {}
    sleep 2
    docker ps -a |grep 192.168.1. |awk -F ' ' '{print $1}'| xargs -i docker rm {}
    sleep 3
    
    function runZookeeper()
    {
    echo "exec runZookeeper" $1 $2
    # start the container
    docker run --name zookeeper \
      --net=host \
      --restart always \
      -v /data/zookeeper:/data/zookeeper \
      -e ZOO_PORT=2181 \
      -e ZOO_DATA_DIR=/data/zookeeper/data \
      -e ZOO_DATA_LOG_DIR=/data/zookeeper/logs \
      -e ZOO_MY_ID=$2 \
      -e ZOO_SERVERS="server.1=192.168.1.100:2888:3888 server.2=192.168.1.101:2888:3888 server.3=192.168.1.102:2888:3888" \
      -d 192.168.1.153:31809/zookeeper.new.es
        sleep 2
    }
    
    function runKafka()
    {
    echo "exec runKafka" $1 $2
    # the machine has 11 data disks; use them all
    mkdir -p /data{1..11}/kafka

    # start the container
    docker run --name kafka \
            --net=host \
            --volume /data1:/data1 \
            --volume /data2:/data2 \
            --volume /data3:/data3 \
            --volume /data4:/data4 \
            --volume /data5:/data5 \
            --volume /data6:/data6 \
            --volume /data7:/data7 \
            --volume /data8:/data8 \
            --volume /data9:/data9 \
            --volume /data10:/data10 \
            --volume /data11:/data11 \
            -e KAFKA_BROKER_ID=$2 \
            -e KAFKA_PORT=9092 \
            -e KAFKA_HEAP_OPTS="-Xms8g -Xmx8g" \
            -e KAFKA_HOST_NAME=$1 \
            -e KAFKA_ADVERTISED_HOST_NAME=$1 \
            -e KAFKA_LOG_DIRS=/data1/kafka,/data2/kafka,/data3/kafka,/data4/kafka,/data5/kafka,/data6/kafka,/data7/kafka,/data8/kafka,/data9/kafka,/data10/kafka,/data11/kafka \
            -e KAFKA_ZOOKEEPER_CONNECT="192.168.1.100:2181,192.168.1.101:2181,192.168.1.102:2181" \
            -d 192.168.1.153:31809/kafka.new.es
        sleep 2
    }
    
    
    
    function runMaster()
    {
    echo "exec runMaster" $1 $2
    # remove exited containers with the same name (disabled)
    #docker ps -a | grep es_master |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
    # start the container
    # note: the 1G --memory limit is far below the 8g JVM heap; these two values likely need aligning
    docker run --name es_master \
            -d --net=host \
            --restart=always \
            --privileged=true \
            --ulimit nofile=655350 \
            --ulimit memlock=-1 \
            --memory=1G \
            --memory-swap=-1 \
            --cpus=0.5 \
            --volume /data:/data \
            --volume /etc/localtime:/etc/localtime \
            -e TERM=dumb \
            -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
            -e cluster.name="iyunwei" \
            -e node.name="MASTER-"$2 \
            -e node.master=true \
            -e node.data=false \
            -e node.ingest=false \
            -e node.attr.rack="0402-K03" \
            -e discovery.zen.ping.unicast.hosts="192.168.1.110:9301,192.168.1.111:9301,192.168.1.112:9301,192.168.1.110:9300,192.168.1.112:9300,192.168.1.113:9300,192.168.1.120:9300,192.168.1.121:9300,192.168.1.122:9300,192.168.1.123:9300" \
            -e discovery.zen.minimum_master_nodes=2 \
            -e gateway.recover_after_nodes=5 \
            -e network.host=0.0.0.0 \
            -e transport.tcp.port=9301 \
            -e http.port=9201 \
            -e path.data="/data/iyunwei/master" \
            -e path.logs=/data/elastic/logs \
            -e bootstrap.memory_lock=true \
            -e bootstrap.system_call_filter=false \
            -e indices.fielddata.cache.size="25%" \
            192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
        sleep 2
    }
    
    
    
    function runClient()
    {
    echo "exec runClient" $1 $2
    #docker ps -a | grep es_client |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
    docker run --name es_client \
            -d --net=host \
            --restart=always \
            --privileged=true \
            --ulimit nofile=655350 \
            --ulimit memlock=-1 \
            --memory=1G \
            --memory-swap=-1 \
            --cpus=0.5 \
            --volume /data:/data \
            --volume /etc/localtime:/etc/localtime \
            -e TERM=dumb \
            -e ES_JAVA_OPTS="-Xms31g -Xmx31g" \
            -e cluster.name="iyunwei" \
            -e node.name="CLIENT-"$2 \
            -e node.master=false \
            -e node.data=false \
            -e node.attr.rack="0402-K03" \
            -e discovery.zen.ping.unicast.hosts="192.168.1.110:9301,192.168.1.111:9301,192.168.1.112:9301,192.168.1.110:9300,192.168.1.112:9300,192.168.1.113:9300,192.168.1.120:9300,192.168.1.121:9300,192.168.1.122:9300,192.168.1.123:9300" \
            -e discovery.zen.minimum_master_nodes=2 \
            -e gateway.recover_after_nodes=2 \
            -e network.host=0.0.0.0 \
            -e transport.tcp.port=9300 \
            -e http.port=9200 \
            -e path.data="/data/iyunwei/client" \
            -e path.logs=/data/elastic/logs \
            -e bootstrap.memory_lock=true \
            -e bootstrap.system_call_filter=false \
            -e indices.fielddata.cache.size="25%" \
            192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
        sleep 2
    }
    
    
    function runDATA()
    {
    echo "exec runDATA" $1 $2
    #docker ps -a | grep es_data |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
    docker run --name es_data \
            -d --net=host \
            --restart=always \
            --privileged \
            --ulimit nofile=655350 \
            --ulimit memlock=-1 \
            --volume /data:/data \
            --volume /data1:/data1 \
            --volume /data2:/data2 \
            --volume /data3:/data3 \
            --volume /data4:/data4 \
            --volume /data5:/data5 \
            --volume /data6:/data6 \
            --volume /data7:/data7 \
            --volume /data8:/data8 \
            --volume /data9:/data9 \
            --volume /data10:/data10 \
            --volume /data11:/data11 \
            --volume /data12:/data12 \
            --volume /etc/localtime:/etc/localtime \
            -e TERM=dumb \
            -e ES_JAVA_OPTS="-Xms31g -Xmx31g" \
            -e cluster.name="iyunwei" \
            -e node.name="DATA-"$2 \
            -e node.master=false \
            -e node.data=true \
            -e node.ingest=false \
            -e node.attr.rack="0402-Q06" \
            -e discovery.zen.ping.unicast.hosts="192.168.1.110:9301,192.168.1.111:9301,192.168.1.112:9301,192.168.1.110:9300,192.168.1.112:9300,192.168.1.113:9300,192.168.1.120:9300,192.168.1.121:9300,192.168.1.122:9300,192.168.1.123:9300" \
            -e discovery.zen.minimum_master_nodes=2 \
            -e gateway.recover_after_nodes=2 \
            -e network.host=0.0.0.0 \
            -e http.port=9200 \
            -e path.data="/data1/iyunwei/data,/data2/iyunwei/data,/data3/iyunwei/data,/data4/iyunwei/data,/data5/iyunwei/data,/data6/iyunwei/data,/data7/iyunwei/data,/data8/iyunwei/data,/data9/iyunwei/data,/data10/iyunwei/data,/data11/iyunwei/data,/data12/iyunwei/data" \
            -e path.logs=/data/elastic/logs \
            -e bootstrap.memory_lock=true \
            -e bootstrap.system_call_filter=false \
            -e indices.fielddata.cache.size="25%" \
            192.168.1.153:31809/elastic/elasticsearch:5.6.8.new.es
        sleep 2
    }
    
    
    function runKibana()
    {
    echo "exec runKibana" $1 $2
    #docker ps -a | grep kibana | egrep "Exited|Create" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
    docker run --name kibana \
            --restart=always \
            -d --net=host \
            -v /data:/data \
            -v /etc/localtime:/etc/localtime \
            --privileged \
            -e TERM=dumb \
            -e SERVER_HOST=0.0.0.0 \
            -e SERVER_PORT=5601 \
            -e SERVER_NAME=Kibana-$2 \
            -e ELASTICSEARCH_URL=http://localhost:9200 \
            -e ELASTICSEARCH_USERNAME=elastic \
            -e ELASTICSEARCH_PASSWORD=changeme \
            -e XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=false \
            -e LOG_FILE=/data/elastic/logs/kibana.log \
            192.168.1.153:31809/elastic/kibana:5.6.8.new.es
        sleep 2
    }
    
    function runLogstash()
    {
    echo "exec runLogstash" $1 $2
    #docker ps -a | grep logstash |egrep "Exited|Created" | awk '{print $1}'|xargs -i% docker rm -f % 2>/dev/null
    docker run --name logstash \
            -d --net=host \
            --restart=always \
            --privileged \
            --ulimit nofile=655350 \
            --ulimit memlock=-1 \
            -e ES_JAVA_OPTS="-Xms16g -Xmx16g" \
            -e TERM=dumb \
            --volume /etc/localtime:/etc/localtime \
            --volume /data/elastic/config:/usr/share/logstash/config \
            --volume /data/elastic/config/pipeline:/usr/share/logstash/pipeline \
            --volume /data/elastic/logs:/usr/share/logstash/logs \
            192.168.1.153:31809/elastic/logstash:5.6.8.new.es
        sleep 2
    }
    
    function cfgkafka()
    {
        if [[ "${ip}" = "$1" ]]; then

            echo "exec cfgkafka" $1 $2

            mkdir -p /data/zookeeper

            runZookeeper $1 $2

            runKafka $1 $2

        fi
    }
    
    function cfgMaster()
    {
        if [[ "${ip}" = "$1" ]]; then

            echo "exec cfgMaster" $1 $2

            mkdir -p /data/iyunwei/master
            mkdir -p /data/iyunwei/client
            chown -R 1000:1000 /data/iyunwei

            runMaster $1 $2

            runClient $1 $2

            runKibana $1 $2

            runLogstash $1 $2

        fi
    }
    
    function cfgDATA()
    {
        if [[ "${ip}" = "$1" ]]; then

            echo "exec cfgDATA" $1 $2

            mkdir -p /data{1..12}/iyunwei/data
            chown -R 1000:1000 /data{1..12}/iyunwei

            runDATA $1 $2

        fi
    }
     
    
    
    index=0
    for kafkaClusterIP in "192.168.1.100"  "192.168.1.101" "192.168.1.102"
    do 
    
      index=$(($index+1))   
      
      echo "cfgkafka" $kafkaClusterIP $index   
      
      cfgkafka  $kafkaClusterIP $index 
      
    done
     
    
    #Master
    MasterIndex=0
    for MasterIP in "192.168.1.110"  "192.168.1.111" "192.168.1.112"
    do 
    
      MasterIndex=$(($MasterIndex+1))   
      
      echo "cfgMaster" $MasterIP $MasterIndex 
      
      cfgMaster $MasterIP $MasterIndex 
      
    done
     
    
    #DATA  
    DATAIndex=0
    for DATAIP in "192.168.1.120"  "192.168.1.121" "192.168.1.122" "192.168.1.123"
    do 
      DATAIndex=$(($DATAIndex+1)) 
      
      echo "cfgDATA" $DATAIP  $DATAIndex
      
      cfgDATA $DATAIP $DATAIndex 
    done
    sleep 3
    
    echo "exec other vm action"
    execOtherVmShell "192.168.1.100"
    execOtherVmShell "192.168.1.101"
    execOtherVmShell "192.168.1.102"
    
    execOtherVmShell "192.168.1.110"
    execOtherVmShell "192.168.1.111"
    execOtherVmShell "192.168.1.112" 
    
    execOtherVmShell "192.168.1.120" 
    execOtherVmShell "192.168.1.121" 
    execOtherVmShell "192.168.1.122" 
    execOtherVmShell "192.168.1.123" 
    
    curl -XPUT http://192.168.1.111:9200/_license?acknowledge=true -d @z-hl-c53cc450-62bf-4b65-b7f2-432e2aae9c62-v5.json -uelastic:changeme

    Kernel parameters (cat /etc/sysctl.conf):

    vm.max_map_count = 655360
    vm.swappiness = 1
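    Elasticsearch needs vm.max_map_count raised and swapping minimized; after editing the file, the settings can be applied without a reboot:

    sysctl -p    # reload /etc/sysctl.conf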

    IP plan (differs from the original article):

        To keep communication simple, no complex network is built; everything sits in one subnet.

            192.168.1.100-102 unchanged, for Kafka

            192.168.1.110-112 correspond to the article's 192.168.2.100-102

            192.168.1.120-123 correspond to the article's 192.168.3.100-103

      Now the containers deploy successfully, but:

            the Kafka image has a config-file permission problem;

            Kibana on the es-master nodes is reachable but login fails:

                   http://192.168.1.110:5601/login?next=%2F#?_g=()

                 most likely because the license has not been registered;

         when registering the license with curl, the port cannot be reached — this needs a look.
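    Before chasing the license call, a quick check that the cluster formed and the HTTP ports answer (9201 is the master HTTP port and 9200 the client port configured above; elastic/changeme is the x-pack default used in the run commands):

    curl -u elastic:changeme http://192.168.1.110:9201/_cat/nodes?v
    curl -u elastic:changeme http://192.168.1.110:9200/_cluster/health?pretty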

    docker ps -a shows all the containers exist;

    docker ps shows the Kafka container is not running.

    Because the node IPs differ from the ones the original images were built for, the config files probably have to be fetched from inside
    the containers, their IPs corrected, and new images built; it is not yet clear which config files need changing.
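    A minimal sketch of that extract-fix-rebuild loop for the Kafka container; the config path inside the container is a guess, not confirmed from the image:

    docker cp kafka:/opt/kafka/config/server.properties .       # copy the config out
    # ...edit the IP-related settings in server.properties...
    docker cp server.properties kafka:/opt/kafka/config/        # copy it back in
    docker commit kafka 192.168.1.153:31809/kafka.new.es:fixed  # bake a new image
    docker push 192.168.1.153:31809/kafka.new.es:fixed

    The /etc/hosts mapping currently pushed to every node: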
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.151    docker-slave1    docker-slave1.com
    192.168.1.152    docker-slave2    docker-slave2.com
    192.168.1.153    docker-slave3    docker-slave3.com
    192.168.1.161    docker-master1    docker-master1.com
    192.168.1.162    docker-master2    docker-master2.com
    192.168.1.163    docker-master3    docker-master3.com
    192.168.1.110    es-master1    es-master1.com
    192.168.1.111    es-master2    es-master2.com
    192.168.1.112    es-master3    es-master3.com
    192.168.1.100    es-kafka1    es-kafka1.com
    192.168.1.101    es-kafka2    es-kafka2.com
    192.168.1.102    es-kafka3    es-kafka3.com
    192.168.1.120    es-data1    es-data1.com
    192.168.1.121    es-data2    es-data2.com
    192.168.1.122    es-data3    es-data3.com
    192.168.1.123    es-data4    es-data4.com

           
