
Elasticsearch Load-Balanced Cluster


Preface:
master node  # maintains the cluster state and cluster-wide settings; its load should be kept as small as possible
data node    # holds the data and handles CRUD, search, and aggregations
client node  # acts as a load balancer: routes requests, handles the search reduce phase, and distributes bulk indexing
tribe node   # a special kind of client node that can work across multiple clusters
(the node.master / node.data combinations behind these roles are sketched below)
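
A minimal reference sketch of the elasticsearch.yml settings behind each role (tribe nodes are configured separately through tribe.* settings):

# dedicated master node
node.master: true
node.data: false
# dedicated data node
node.master: false
node.data: true
# client node (not master-eligible, holds no data)
node.master: false
node.data: false
# default: master-eligible and data node
node.master: true
node.data: true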


Environment:
CentOS 7.1 x64
elasticsearch-2.3.2

Note: a small cluster or a local test setup does not need to distinguish master, data, and client nodes.
For production, however, the official recommendation for maximum scalability is to separate the node types; by default an Elasticsearch node is both master-eligible and a data node.

The official recommendation is at least 3 master-eligible nodes to avoid split brain.
    By default, each index in Elasticsearch is allocated 5 primary shards and 1 replica which means that if you have at least two nodes in your cluster, your index will have 5 primary shards and another 5 replica shards (1 complete replica) for a total of 10 shards per index.
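
For illustration, the shard and replica counts can also be set explicitly when an index is created (the index name my_index below is just a placeholder):

curl -XPUT 'http://localhost:9200/my_index?pretty' -d '
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'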

This walkthrough uses a cluster of dedicated node types (client x1, master x3, data x2):
    ela-client.example.com:192.168.8.10(client node)
    ela-master1.example.com:192.168.8.101(master node)
    ela-master2.example.com:192.168.8.102(master node)
    ela-master3.example.com:192.168.8.103(master node)
    ela-data1.example.com:192.168.8.201(data node)
    ela-data2.example.com:192.168.8.202(data node)


I. Installation (omitted)
Install the same version of Elasticsearch on every node.


II. System resource configuration (all nodes)
1. Tune kernel parameters
swapoff -a # also remove or comment out the swap entry in /etc/fstab

cat >>/usr/lib/sysctl.d/00-system.conf <<HERE
vm.swappiness = 1
vm.max_map_count = 262144
HERE

    sysctl -p /usr/lib/sysctl.d/00-system.conf
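
A quick check that the new values are active (expected output shown as comments):

sysctl vm.swappiness vm.max_map_count
# vm.swappiness = 1
# vm.max_map_count = 262144
swapon -s   # should list no swap devices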

2. Adjust resource limits

    https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html


JVM heap size

cat >>/etc/profile <<HERE
export ES_MIN_MEM=1g
export ES_MAX_MEM=1g
HERE

    source /etc/profile

Alternatively, edit the environment setup script /opt/elasticsearch-2.3.2/bin/elasticsearch.in.sh directly.
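
As another option (my recollection of the 2.x startup scripts), ES_HEAP_SIZE sets both the minimum and maximum heap in one variable; keeping them equal avoids heap resizing pauses:

export ES_HEAP_SIZE=1g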


File descriptor limit and memlock

cat >>/etc/security/limits.conf <<HERE
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
HERE
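
A quick sanity check that the limits apply to the elasticsearch user (su goes through PAM, so limits.conf is honored):

su - elasticsearch -c 'ulimit -n; ulimit -l'
# expected: 65536 and unlimited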


     

III. Elasticsearch configuration (config/elasticsearch.yml)

Create a cluster named elasticsearch_cluster (the default cluster name is elasticsearch).
1. client node
    ela-client.example.com:192.168.8.10(client node)

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: false
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.10
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE


2. master node
    ela-master1.example.com:192.168.8.101(master node)

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.101
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
    ela-master2.example.com:192.168.8.102(master node)

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.102
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
    ela-master3.example.com:192.168.8.103(master node)

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.103
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE

3. data node
    ela-data1.example.com:192.168.8.201(data node)

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: true
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.201
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
    ela-data2.example.com:192.168.8.202(data node)

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: true
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.202
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE

Tip: the configuration file can also be written in expanded YAML form (related keys nested under one parent), e.g.:
cluster:
    name: elasticsearch_cluster
node:
    name: ${HOSTNAME}
    master: true
    data: false
path:
    data: /opt/elasticsearch-2.3.2/data
    logs: /opt/elasticsearch-2.3.2/logs
bootstrap:
    mlockall: true
network:
    host: 192.168.8.101
http:
    port: 9200
index:
    refresh_interval: 5s
Sensitive values can be replaced with ${prompt.text} or ${prompt.secret}; Elasticsearch then prompts for them interactively at startup, e.g.:
    node.name: ${prompt.text}


IV. Start the Elasticsearch nodes

    su - elasticsearch -c "/opt/elasticsearch-2.3.2/bin/elasticsearch -d -p /tmp/elasticsearch.pid"
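
A quick way to confirm a node is up (assuming HTTP listens on port 9200 as configured above):

curl 'http://localhost:9200/?pretty'        # returns node name, cluster_name and version
curl 'http://localhost:9200/_cat/nodes?v'   # lists the nodes this node currently sees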

To stop:
    kill $(cat /tmp/elasticsearch.pid)

Configuration can also be passed on the command line:
su - elasticsearch -c '/opt/elasticsearch-2.3.2/bin/elasticsearch -Des.cluster.name=elasticsearch_cluster -Des.node.name=elastic1.example.com -Des.path.data=/opt/elasticsearch-2.3.2/data -Des.path.logs=/opt/elasticsearch-2.3.2/logs -Des.bootstrap.mlockall=true -Des.network.host=127.0.0.1,192.168.8.101 -Des.http.port=9200 -Des.index.refresh_interval=5s'
su - elasticsearch -c '/opt/elasticsearch-2.3.2/bin/elasticsearch --cluster.name elasticsearch_cluster --node.name elastic1.example.com --path.data /opt/elasticsearch-2.3.2/data --path.logs /opt/elasticsearch-2.3.2/logs --bootstrap.mlockall true --network.host 127.0.0.1,192.168.8.101 --http.port 9200 --index.refresh_interval 5s'


In cluster mode the transport layer listens on port 9300 by default (HTTP remains on 9200):

[root@ela-master1 ~]# netstat -tunlp | grep java
tcp        0      0 192.168.8.101:9200      0.0.0.0:*               LISTEN      2739/java
tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      2739/java
tcp        0      0 192.168.8.101:9300      0.0.0.0:*               LISTEN      2739/java
tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      2739/java


[root@ela-master1 ~]# curl 'http://localhost:9200/_nodes/process?pretty'
{
  "cluster_name" : "elasticsearch_cluster",
  "nodes" : {
    "hT7NA2G9S72TW3G_Xhip8A" : {
      "name" : "ela-master1.example.com",
      "transport_address" : "192.168.8.101:9300",
      "host" : "192.168.8.101",
      "ip" : "192.168.8.101",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.101:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 2739,
        "mlockall" : true
      }
    }
  }
}

[root@ela-master1 ~]# curl 'http://localhost:9200/_nodes/stats/process?pretty'
{
  "cluster_name" : "elasticsearch_cluster",
  "nodes" : {
    "hT7NA2G9S72TW3G_Xhip8A" : {
      "timestamp" : 1462608578095,
      "name" : "ela-master1.example.com",
      "transport_address" : "192.168.8.101:9300",
      "host" : "192.168.8.101",
      "ip" : [ "192.168.8.101:9300", "NONE" ],
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "timestamp" : 1462608578095,
        "open_file_descriptors" : 113,
        "max_file_descriptors" : 65536,
        "cpu" : {
          "percent" : 0,
          "total_in_millis" : 14120
        },
        "mem" : {
          "total_virtual_in_bytes" : 3714117632
        }
      }
    }
  }
}

Tip: after raising the JVM heap, if the process gets OOM-killed check the host's memory size. In my test the host had only 512 MB of RAM while both the minimum and maximum heap were set to 1 GB, so the process was killed right at startup; after adding memory it ran fine. For other errors check the log /opt/elasticsearch-2.3.2/logs/elasticsearch_cluster.log.


Start the remaining nodes the same way (omitted).




V. Build the cluster (unicast mode)

Tip: multicast discovery requires installing a plugin:
/opt/elasticsearch-2.3.2/bin/plugin install discovery-multicast

Nodes join the cluster through the discovery module.
Append the following lines to /opt/elasticsearch-2.3.2/config/elasticsearch.yml on each of the nodes above, then restart:

cat >>/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
discovery.zen.ping.timeout: 100s
discovery.zen.fd.ping_timeout: 100s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.8.10:9300", "192.168.8.101:9300", "192.168.8.102:9300", "192.168.8.103:9300", "192.168.8.201:9300", "192.168.8.202:9300"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 2
HERE


According to the official documentation, discovery.zen.minimum_master_nodes should be set to a quorum of master-eligible nodes:

    (master_eligible_nodes / 2) + 1

In other words, if there are three master-eligible nodes, then minimum master nodes should be set to (3 / 2) + 1 or 2:

    discovery.zen.minimum_master_nodes: 2

Defaults to 1.

This setting can also be changed dynamically on a live cluster with the cluster update settings API:

    PUT _cluster/settings
    {
      "transient": {
        "discovery.zen.minimum_master_nodes": 2
      }
    }

Tip: an advantage of splitting the master and data roles between dedicated nodes is that you can have just three master-eligible nodes and set minimum_master_nodes to 2. You never have to change this setting, no matter how many dedicated data nodes you add to the cluster.

Once all nodes are up, the cluster state can be queried from any node:

[root@ela-master3 ~]# curl 'http://localhost:9200/_nodes/process?pretty'
{
  "cluster_name" : "elasticsearch_cluster",
  "nodes" : {
    "RImoGoyDQue5PxKjjuGXJA" : {
      "name" : "ela-master2.example.com",
      "transport_address" : "192.168.8.102:9300",
      "host" : "192.168.8.102",
      "ip" : "192.168.8.102",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.102:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 2103,
        "mlockall" : true
      }
    },
    "h_jajHU_Rea7EUjKtQUnDA" : {
      "name" : "ela-master3.example.com",
      "transport_address" : "192.168.8.103:9300",
      "host" : "192.168.8.103",
      "ip" : "192.168.8.103",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.103:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1073,
        "mlockall" : true
      }
    },
    "dAp0lnpVTPavfTuiabM-hg" : {
      "name" : "ela-data1.example.com",
      "transport_address" : "192.168.8.201:9300",
      "host" : "192.168.8.201",
      "ip" : "192.168.8.201",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.201:9200",
      "attributes" : {
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 984,
        "mlockall" : true
      }
    },
    "TMN4Rr72TTaWpPdve8H5dQ" : {
      "name" : "localhost.localdomain",
      "transport_address" : "192.168.8.202:9300",
      "host" : "192.168.8.202",
      "ip" : "192.168.8.202",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.202:9200",
      "attributes" : {
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1020,
        "mlockall" : true
      }
    },
    "gi3kX3IXTJiC0PdUf0KH1A" : {
      "name" : "ela-master1.example.com",
      "transport_address" : "192.168.8.101:9300",
      "host" : "192.168.8.101",
      "ip" : "192.168.8.101",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.101:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1139,
        "mlockall" : true
      }
    },
    "PMmbEGuoTyCR54FbGt5bpg" : {
      "name" : "ela-client.example.com",
      "transport_address" : "192.168.8.10:9300",
      "host" : "192.168.8.10",
      "ip" : "192.168.8.10",
      "version" : "2.3.2",
      "build" : "b9e4a6a",
      "http_address" : "192.168.8.10:9200",
      "attributes" : {
        "data" : "false",
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 986,
        "mlockall" : true
      }
    }
  }
}

      

The log shows similar information:

tail -f /opt/elasticsearch-2.3.2/logs/elasticsearch_cluster.log
detected_master {ela-master2.example.com}{RImoGoyDQue5PxKjjuGXJA}{192.168.8.102}{192.168.8.102:9300}{data=false, master=true},
added {{ela-master2.example.com}{RImoGoyDQue5PxKjjuGXJA}{192.168.8.102}{192.168.8.102:9300}{data=false, master=true},
{ela-master3.example.com}{h_jajHU_Rea7EUjKtQUnDA}{192.168.8.103}{192.168.8.103:9300}{data=false, master=true},},
reason: zen-disco-receive(from master [{ela-master2.example.com}{RImoGoyDQue5PxKjjuGXJA}{192.168.8.102}{192.168.8.102:9300}{data=false, master=true}])
[2016-05-07 17:08:57,679][INFO ][cluster.service          ] [ela-master1.example.com]
added {{ela-client.example.com}{PMmbEGuoTyCR54FbGt5bpg}{192.168.8.10}{192.168.8.10:9300}{data=false, master=false},},
reason: zen-disco-receive(from master [{ela-master2.example.com}{RImoGoyDQue5PxKjjuGXJA}{192.168.8.102}{192.168.8.102:9300}{data=false, master=true}])
[2016-05-07 17:10:14,944][INFO ][cluster.service          ] [ela-master1.example.com]
added {{ela-data1.example.com}{dAp0lnpVTPavfTuiabM-hg}{192.168.8.201}{192.168.8.201:9300}{master=false},},
reason: zen-disco-receive(from master [{ela-master2.example.com}{RImoGoyDQue5PxKjjuGXJA}{192.168.8.102}{192.168.8.102:9300}{data=false, master=true}])
[2016-05-07 17:11:14,109][INFO ][cluster.service          ] [ela-master1.example.com]
added {{ela-data2.example.com}{TMN4Rr72TTaWpPdve8H5dQ}{192.168.8.202}{192.168.8.202:9300}{master=false},},
reason: zen-disco-receive(from master [{ela-master2.example.com}{RImoGoyDQue5PxKjjuGXJA}{192.168.8.102}{192.168.8.102:9300}{data=false, master=true}])


VI. Import sample data
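
The import commands themselves are not shown in the original run; below is a sketch using the sample data sets from the official getting-started guide (accounts.json, shakespeare.json and the logstash sample logs, assumed to have been downloaded to the current directory), sent through the client node so the indexing load is spread across the data nodes:

curl -XPOST 'http://192.168.8.10:9200/bank/account/_bulk?pretty' --data-binary "@accounts.json"
curl -XPOST 'http://192.168.8.10:9200/shakespeare/_bulk?pretty' --data-binary "@shakespeare.json"

After the import, the indices can be listed from any node: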

[root@ela-client ~]# curl 'localhost:9200/_cat/indices?v'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
green  open   shakespeare           5   1     111396            0     36.5mb         18.2mb
green  open   logstash-2015.05.20   5   1       4750            0     72.9mb         35.9mb
green  open   bank                  5   1       1000              890.5kb        442.6kb
green  open   .kibana               1                     0     47.9kb         25.2kb
green  open   logstash-2015.05.18   5   1       4631            0     64.9mb         34.7mb
green  open   logstash-2015.05.19   5   1       4624            0     66.7mb         35.3mb

[root@ela-client ~]# curl 'http://localhost:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462623617 20:20:17  elasticsearch_cluster green             6         2     52  26      0       32                             -                 100%


Tip: importing through the client node spreads the indexing load across the shards on the data nodes. Kibana can also be pointed at a client-type Elasticsearch node for load balancing, e.g.:
elasticsearch.url: "http://192.168.8.10:9200"
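
For example (bank is one of the sample indices above), a search sent to the client node is fanned out to the data nodes holding the shards and the reduced result comes back from 192.168.8.10:

curl 'http://192.168.8.10:9200/bank/_search?q=state:CA&pretty'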


Appendix: a three-node "flat" cluster (every node is master-eligible and holds data)
elastic1.example.com:192.168.8.101 (master & data node)
elastic2.example.com:192.168.8.102 (master & data node)
elastic3.example.com:192.168.8.103 (master & data node)

Keep the default node type (node.master: true, node.data: true).
elastic1.example.com:192.168.8.101

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.101
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
    elastic2.example.com:192.168.8.102

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.102
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
    elastic3.example.com:192.168.8.103

mv /opt/elasticsearch-2.3.2/config/elasticsearch.yml{,.default}
cat >/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
path.data: /opt/elasticsearch-2.3.2/data
path.logs: /opt/elasticsearch-2.3.2/logs
bootstrap.mlockall: true
network.host: 127.0.0.1,192.168.8.103
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
Right after startup the nodes have no way to find each other, so each one runs as its own single-node cluster; the status may look like this:

[root@elastic1 ~]# curl 'http://192.168.8.101:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462205047 00:04:07  elasticsearch_cluster green                                                                               100.0%
[root@elastic1 ~]# curl 'http://192.168.8.102:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462205053 00:04:13  elasticsearch_cluster green                                                                                   100.0%
[root@elastic1 ~]# curl 'http://192.168.8.103:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462205057 00:04:17  elasticsearch_cluster green                                                                                   100.0%


Append the following lines to /opt/elasticsearch-2.3.2/config/elasticsearch.yml on each of the nodes above, then restart:

cat >>/opt/elasticsearch-2.3.2/config/elasticsearch.yml <<HERE
discovery.zen.ping.timeout: 100s
discovery.zen.fd.ping_timeout: 100s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.8.101:9300", "192.168.8.102:9300", "192.168.8.103:9300"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 2
HERE

A log excerpt from 192.168.8.103 after the restart; you can see that 192.168.8.101 has been elected master:

[2016-05-03 00:29:47,837][INFO ][env            ] [elastic3.example.com] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-05-03 00:29:49,165][INFO ][node           ] [elastic3.example.com] initialized
[2016-05-03 00:29:49,165][INFO ][node           ] [elastic3.example.com] starting ...
[2016-05-03 00:29:49,231][INFO ][transport      ] [elastic3.example.com] publish_address {192.168.8.103:9300}, bound_addresses {192.168.8.103:9300}
[2016-05-03 00:29:49,236][INFO ][discovery      ] [elastic3.example.com] elasticsearch_cluster/eTXMSH77SLynRTNtKKXaHA
[2016-05-03 00:29:52,357][INFO ][cluster.service] [elastic3.example.com] detected_master {elastic1.example.com}{VbXED9YbQ7qQGcSZ0TlKPQ}{192.168.8.101}{192.168.8.101:9300},
added {{elastic2.example.com}{v_yQRvrYT9GHumJ7kIzpUQ}{192.168.8.102}{192.168.8.102:9300},{elastic1.example.com}{VbXED9YbQ7qQGcSZ0TlKPQ}{192.168.8.101}{192.168.8.101:9300},},
reason: zen-disco-receive(from master [{elastic1.example.com}{VbXED9YbQ7qQGcSZ0TlKPQ}{192.168.8.101}{192.168.8.101:9300}])
[2016-05-03 00:29:52,401][INFO ][http           ] [elastic3.example.com] publish_address {192.168.8.103:9200}, bound_addresses {192.168.8.103:9200}
[2016-05-03 00:29:52,401][INFO ][node           ] [elastic3.example.com] started


    https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html

[root@elastic3 ~]# curl 'http://192.168.8.101:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462207733 00:48:53  elasticsearch_cluster green           3         3     10             0                                           100.0%
[root@elastic3 ~]# curl 'http://192.168.8.102:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462207744 00:49:04  elasticsearch_cluster green           3         3     10             0                                           100.0%
[root@elastic3 ~]# curl 'http://192.168.8.103:9200/_cat/health?v'
epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1462207751 00:49:11  elasticsearch_cluster green           3         3     10             0                                           100.0%

Checking the status again, the cluster now reports the expected values:

[root@elastic3 ~]# curl -XGET 'http://192.168.8.101:9200/_cluster/health/192.168.8.*'
{"cluster_name":"elasticsearch_cluster","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}



At this point the Elasticsearch cluster is up and running. For more cluster-related settings, see the official documentation.


Further reading:

    https://github.com/mobz/elasticsearch-head

    https://www.elastic.co/guide/en/marvel/current/introduction.html

Monitoring and visualization plugins such as elasticsearch-head and Marvel are also worth a try.
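
For example, site plugins on 2.x can be installed with the bundled plugin script (host and port below assume the client node configured above); head is then served under /_plugin/head/:

/opt/elasticsearch-2.3.2/bin/plugin install mobz/elasticsearch-head
# then browse to http://192.168.8.10:9200/_plugin/head/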
