  • Installing an Elasticsearch 7.4 Cluster (with Cluster Auth + Transport SSL), plus Kibana & Keystore

          This guide is based on CentOS 7.

    • Installed from the tar archives
    • Elasticsearch 7.4.2, with the bundled JDK (OpenJDK 13.0.1); abbreviated as ES below
    • Kibana 7.4.2

    Elastic Stack License

    In recent Elastic versions, the Basic (free) license already includes the core security features and can be deployed in production, so an Nginx + Basic Auth proxy is no longer needed.

    See the subscription comparison on the official Elastic website for details.

    Security features are disabled by default in Elastic. In this article we use the Basic edition with its automatically applied Basic license, and then enable auth as well as SSL-encrypted communication between nodes.

    Installing Elasticsearch

    I only have one EC2 host here but need a cluster of three Elasticsearch nodes, so each node runs on its own set of ports.

    IP          HTTP Port (ES HTTP API)  Transport Port (internal cluster traffic)  Name
    172.17.0.87 9200                     9331                                       es01
    172.17.0.87 9201                     9332                                       es02
    172.17.0.87 9202                     9333                                       es03

    Download Elasticsearch

    $ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-linux-x86_64.tar.gz
    $ tar xf elasticsearch-7.4.2-linux-x86_64.tar.gz
     

    If you just want to quickly stand up a single ES node for testing, you can start it directly:

    $ cd elasticsearch-7.4.2
    $ ./bin/elasticsearch
     

    By default this starts in development mode: unmet prerequisites only produce warnings rather than preventing startup, and the node listens only on 127.0.0.1:9200, so it is suitable for testing only. Once you change the `network.host` parameter in the `elasticsearch.yml` configuration file, the node starts in production mode.

    We will run in production mode here, which means all of its prerequisites must be satisfied or the node will refuse to start.

    Directory layout

    Official reference

    Type  Description  Default Location  Setting
    home Elasticsearch home directory or $ES_HOME Directory created by unpacking the archive ES_HOME
    bin Binary scripts including elasticsearch to start a node and elasticsearch-plugin to install plugins $ES_HOME/bin  
    conf Configuration files including elasticsearch.yml $ES_HOME/config ES_PATH_CONF
    data The location of the data files of each index / shard allocated on the node. Can hold multiple locations. $ES_HOME/data path.data
    logs Log files location. $ES_HOME/logs path.logs
    plugins Plugin files location. Each plugin will be contained in a subdirectory. $ES_HOME/plugins  
    repo Shared file system repository locations. Can hold multiple locations. A file system repository can be placed in to any subdirectory of any directory specified here. Not configured path.repo
    script Location of script files. $ES_HOME/scripts path.scripts

    System settings

    System dependency reference

    ulimits

    Edit the configuration file /etc/security/limits.conf. Since I run ES as the default ec2-user account, that is the account used below; fill in your own account, or use an asterisk to match all users.

    # - nofile - max number of open file descriptors
    # - memlock - max locked-in-memory address space (KB)
    # - nproc - max number of processes
    $ vim /etc/security/limits.conf
    ec2-user - nofile 65535
    ec2-user - memlock unlimited
    ec2-user - nproc 4096

    # Then log out and log back in
     

    Verify:

    $ ulimit -a
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 63465
    max locked memory (kbytes, -l) unlimited ## now in effect
    max memory size (kbytes, -m) unlimited
    open files (-n) 65535 ## now in effect
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) 4096 ## now in effect
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited
     

    Disable swap

    Run this command to disable swap immediately:

    $ sudo swapoff -a
     

    This only disables swap temporarily; it will come back after a reboot. Edit the following file and remove the swap mount entry:

    $ sudo vim /etc/fstab
     
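Rather than editing by hand, the swap entry can be commented out with sed. A sketch, demonstrated here on a throwaway copy of an fstab (the UUID values are made up):

```shell
# Demo on a throwaway copy of fstab (hypothetical entries)
printf '%s\n' \
  'UUID=aaaa / xfs defaults 0 0' \
  'UUID=bbbb swap swap defaults 0 0' > /tmp/fstab.demo

# Comment out any uncommented line that mounts swap
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

On the real /etc/fstab you would run the same sed with sudo, after backing the file up first.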

    Configure swappiness and virtual memory

    This reduces the kernel's tendency to swap and should not cause swapping under normal conditions, while still letting the system as a whole swap in emergencies.

    # Add the following two lines
    $ sudo vim /etc/sysctl.conf
    vm.swappiness=1
    vm.max_map_count=262144

    # Apply the changes
    $ sudo sysctl -p
     

    Enable memory locking in ES:

    Add the following line to the ES configuration file config/elasticsearch.yml:

    bootstrap.memory_lock: true
     

    The prerequisites for the ES cluster are now in place. Before actually configuring ES and starting the cluster, we should understand a few concepts:

    The sections on Elasticsearch basic concepts and Elasticsearch node roles below will help us configure the ES cluster properly:

    Elasticsearch Basic Concepts

    Cluster

    An Elasticsearch cluster, made up of one or more Elasticsearch nodes.

    Node

    An Elasticsearch node can be thought of as an Elasticsearch server process; starting two Elasticsearch instances (processes) on the same machine gives you two nodes.

    Index

    An index is a collection of documents with the same structure, similar to a database instance in a relational database (after types were deprecated in 6.0.0, an index is closer to the level of a single database table). A cluster can hold multiple indices.

    Type

    A type is a logical subdivision within an index; it has been deprecated in recent Elasticsearch versions.

    Document

    A document is the smallest unit of data stored in Elasticsearch, in JSON format; many documents with the same structure make up an index. A document is similar to a row in a relational database table.

    Shard

    A single index is split into multiple shards, distributed across multiple nodes. Shards enable horizontal scaling to store more data, and because they are spread across nodes they improve the cluster's overall throughput and performance. The number of shards is specified when the index is created and cannot be changed afterwards.

    Replica

    An index replica is a full copy of a shard; a shard can have one or more replicas. Replicas are copies of shard data, added for redundancy.

    A replica serves three purposes:

    • When a shard fails or its node goes down, one of its replicas can be promoted to primary
    • Replicas protect against data loss and provide high availability
    • Replicas can serve search requests, improving cluster throughput and performance

    The full names are primary shard and replica shard. The primary shard count is set at index creation and cannot be changed later; the replica count can be changed at any time. Before 7.0 the defaults per index were 5 primary shards and 1 replica each, i.e. 5 primaries plus 5 replicas for 10 shards in total; since 7.0 the default is 1 primary shard with 1 replica. Either way, the minimal highly available Elasticsearch setup is two servers.
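The reason the primary count is immutable is the routing formula: a document lands on shard hash(routing) % number_of_primary_shards, so changing the count would invalidate the placement of every existing document. A toy illustration in shell, with cksum standing in for Elasticsearch's murmur3 hash:

```shell
# Toy document routing: shard = hash(id) % number_of_primary_shards
# (cksum is only a stand-in for Elasticsearch's murmur3 hash)
shards=5
shard_for() {
  local h
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo $(( h % shards ))
}
for id in doc1 doc2 doc3; do
  echo "$id -> shard $(shard_for "$id")"
done
```

Routing is deterministic: the same id always maps to the same shard, which is exactly why the divisor cannot change after the fact.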

    Elasticsearch Node Roles:

    Official reference

    Nodes in an ES cluster come in the following types:

    • Master-eligible: a node with node.master: true, making it eligible to be elected as the master node that controls the cluster. The master is responsible for lightweight cluster-wide actions such as creating or deleting indices, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes

    • data: a node with node.data: true. Data nodes hold data and perform data-related operations such as CRUD (create, read, update, delete), search and aggregations.

    • ingest: a node with node.ingest: true, able to apply a pipeline to documents in order to transform and enrich them before indexing.

    • machine-learning: a node with xpack.ml.enabled and node.ml set to true; applies to the X-Pack distribution. The OSS distribution must not set this, otherwise the node will not start.

    • coordinating node: requests such as search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases, coordinated by the node that receives the client request (the coordinating node)

      In the scatter phase, the coordinating node forwards the request to the data nodes that hold the data. Each data node executes the request locally and returns its results to the coordinating node. In the gather phase, the coordinating node reduces each data node's results into a single global result set.

      Every node is implicitly a coordinating node. This means a node with node.master, node.data and node.ingest all set to false acts only as a coordinating node, and this role cannot be disabled. Consequently, such a node needs enough memory and CPU to handle the gather phase.


    Defaults:

    • node.master: true
    • node.voting_only: false
    • node.data: true
    • node.ml: true
    • xpack.ml.enabled: true
    • cluster.remote.connect: false

    Master-eligible nodes

    The master node is responsible for lightweight cluster-wide actions such as creating or deleting indices, tracking which nodes are part of the cluster and deciding which shards to allocate to which nodes. A stable master node is essential for cluster health.

    Any master-eligible node that is not a voting-only node may be elected master through the master election process.

    Indexing and searching data is CPU-, memory- and I/O-intensive work that can strain a node's resources. To keep your master nodes stable and unstressed, it is a good idea in larger clusters to separate dedicated master-eligible nodes from dedicated data nodes.

    While master nodes can also act as coordinating nodes and route search and indexing requests from clients to data nodes, it is better not to use dedicated master nodes for this purpose. Master-eligible nodes should do as little other work as possible; this matters for cluster stability.

    To make a node a dedicated master-eligible node, set:

    node.master: true 
    node.voting_only: false
    node.data: false
    node.ingest: false
    node.ml: false
    xpack.ml.enabled: true
    cluster.remote.connect: false
     

    For the OSS distribution:

    node.master: true 
    node.data: false
    node.ingest: false
    cluster.remote.connect: false
     

    Voting-only nodes

    A voting-only node participates in master elections but cannot itself become master; it acts as a tiebreaker in elections.

    To make a node voting-only, set:

    node.master: true 
    node.voting_only: true
    node.data: false
    node.ingest: false
    node.ml: false
    xpack.ml.enabled: true
    cluster.remote.connect: false
     

    Note:

    • The OSS distribution does not support this setting; if it is set, the node will not start.

    • Only master-eligible nodes can be marked as voting-only.

    A high-availability (HA) cluster needs at least three master-eligible nodes, at least two of which are not voting-only; the third can be made voting-only. Such a cluster can still elect a master even if one of these nodes fails.

    Data nodes

    Data nodes hold the shards that contain the documents you have indexed. They handle data-related operations like CRUD, search and aggregations, which are I/O-, memory- and CPU-intensive. It is important to monitor these resources and add more data nodes when they are overloaded.

    The main benefit of dedicated data nodes is the separation of the master and data roles.

    To create a dedicated data node in the default distribution, set:

    node.master: false 
    node.voting_only: false
    node.data: true
    node.ingest: false
    node.ml: false
    cluster.remote.connect: false
     

    For the OSS distribution:

    node.master: false 
    node.data: true
    node.ingest: false
    cluster.remote.connect: false
     

    Ingest nodes

    Ingest nodes can execute pre-processing pipelines composed of one or more ingest processors. Depending on the type of operations the processors perform and the resources they require, it may make sense to have dedicated ingest nodes that only perform this task.

    To create a dedicated ingest node in the default distribution, set:

    node.master: false 
    node.voting_only: false
    node.data: false
    node.ingest: true
    node.ml: false
    cluster.remote.connect: false
     

    For the OSS distribution:

    node.master: false 
    node.data: false
    node.ingest: true
    cluster.remote.connect: false
     

    Coordinating-only nodes

    If you take away the ability to handle master duties, to hold data and to pre-process documents, you are left with a coordinating node that can only route requests, handle the search reduce phase and distribute bulk indexing. Essentially, a coordinating-only node behaves as a smart load balancer.

    Coordinating-only nodes can benefit large clusters by offloading the coordinating role from data and master-eligible nodes. They join the cluster and receive the full cluster state like every other node, and they use that state to route requests directly to the appropriate places.

    Adding too many coordinating-only nodes increases the burden on the entire cluster, because the elected master must await cluster-state-update acknowledgements from every node! The benefit of coordinating-only nodes should not be overstated: data nodes can happily serve the same purpose.

    To create a coordinating-only node, set:

    node.master: false 
    node.voting_only: false
    node.data: false
    node.ingest: false
    node.ml: false
    cluster.remote.connect: false
     

    For the OSS distribution:

    node.master: false 
    node.data: false
    node.ingest: false
    cluster.remote.connect: false
     

    Machine learning nodes

    The machine learning features provide machine learning nodes, which run jobs and handle machine learning API requests. If xpack.ml.enabled is true but node.ml is false, the node can handle API requests but cannot run jobs.

    If you want to use machine learning features in your cluster, you must enable machine learning (set xpack.ml.enabled to true) on all master-eligible nodes. Do not use these settings if you only have the OSS distribution.

    For more information, see the machine learning settings documentation.

    To create a dedicated machine learning node in the default distribution, set:

    node.master: false 
    node.voting_only: false
    node.data: false
    node.ingest: false
    node.ml: true
    xpack.ml.enabled: true
    cluster.remote.connect: false
     

    Configuring Elasticsearch

    Make three copies of the ES directory:

    $ ls
    elasticsearch-7.4.2
    $ mv elasticsearch-7.4.2{,-01}
    $ ls
    elasticsearch-7.4.2-01
    $ cp -a elasticsearch-7.4.2-01 elasticsearch-7.4.2-02
    $ cp -a elasticsearch-7.4.2-01 elasticsearch-7.4.2-03
    $ ln -s elasticsearch-7.4.2-01 es01
    $ ln -s elasticsearch-7.4.2-02 es02
    $ ln -s elasticsearch-7.4.2-03 es03
    $ ll
    total 0
    drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
    drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
    drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
    lrwxrwxrwx 1 ec2-user ec2-user 22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
    lrwxrwxrwx 1 ec2-user ec2-user 22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
    lrwxrwxrwx 1 ec2-user ec2-user 22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03
     

    Configure name resolution for Elasticsearch

    I simply use the hosts file here:

    cat >> /etc/hosts <<EOF
    172.17.0.87 es01 es02 es03
    EOF
     

    Edit the ES configuration file config/elasticsearch.yml

    The default configuration file is $ES_HOME/config/elasticsearch.yml, in YAML format. Settings can be written in three ways:

    path:
      data: /var/lib/elasticsearch
      logs: /var/log/elasticsearch
     

    Or in flattened, single-line form:

    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
     

    Or via environment variables, which is handy in Docker and Kubernetes environments:

    node.name:    ${HOSTNAME}
    network.host: ${ES_NETWORK_HOST}
     
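The effect of this substitution is the same as ordinary shell expansion of the same string; a quick sketch:

```shell
# ES expands ${VAR} placeholders from the environment at startup;
# the result is equivalent to plain shell expansion of the same string.
export ES_NETWORK_HOST=0.0.0.0
line="network.host: ${ES_NETWORK_HOST}"
echo "$line"
```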

    The individual settings are explained below, followed by the elasticsearch.yml configuration for each ES node:

    Elasticsearch settings in detail

    ES paths: path.data & path.logs

    If not configured, these default to the data and logs subdirectories of $ES_HOME:

    path:
      logs: /var/log/elasticsearch
      data: /var/data/elasticsearch
     

    path.data can be set to multiple directories:

    path:
      logs: /data/ES01/logs
      data:
        - /data/ES01-A
        - /data/ES01-B
        - /data/ES01-C
     
    Cluster name: cluster.name

    A node can only join one cluster; nodes configured with the same cluster.name form an ES cluster. Make sure different clusters use different cluster.name values.

    cluster.name: logging-prod
     
    Node name: node.name

    node.name is the human-readable node name used to tell nodes apart; if not configured, it defaults to the hostname

    node.name: prod-data-002
     
    Listen address: network.host

    If not configured, it defaults to listening on 127.0.0.1 and [::1], and the node starts in development mode.

    # Listen on a specific IP
    network.host: 192.168.1.10

    # Listen on all IPs
    network.host: 0.0.0.0
     

    Special values accepted by network.host:

    _[networkInterface]_ Addresses of a network interface, for example _eth0_.
    _local_ Any loopback addresses on the system, for example 127.0.0.1.
    _site_ Any site-local addresses on the system, for example 192.168.0.1.
    _global_ Any globally-scoped addresses on the system, for example 8.8.8.8.
    Discovery and cluster formation settings

    There are two main settings here, discovery and cluster formation, which let the nodes discover each other and elect a master so as to form the ES cluster.

    discovery.seed_hosts

    If not configured, ES listens on the loopback address at startup and scans local ports 9300-9305 to discover other nodes started on the same machine.

    So with no configuration at all, copying the $ES_HOME directory three times and starting all three copies will still form an ES cluster by default, which is fine for testing. If you need to start ES nodes on multiple machines to form a cluster, this setting must be configured so the nodes can discover each other.

    discovery.seed_hosts is a comma-separated list whose elements can be written as:

    • host:port, specifying a custom transport port for inter-node communication
    • host, using the default transport port range 9300-9400; reference
    • a domain name that resolves to multiple IPs; each resolved IP is tried
    • any other custom resolvable name
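Putting these forms together, a seed-hosts list might look like this (the hosts shown are hypothetical):

```yaml
discovery.seed_hosts:
  - 192.168.1.10:9300       # explicit transport port
  - 192.168.1.11            # default transport port range 9300-9400 is probed
  - seeds.mydomain.com      # resolves to several IPs; each one is tried
```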
    cluster.initial_master_nodes

    In development mode this is configured automatically among the nodes discovered on a single host, but in production mode it must be set.

    This setting is only used the very first time a brand-new cluster starts, to name the master-eligible nodes (node.master: true) that may take part in the first election. It has no effect on cluster restarts or when adding new nodes, because by then every node already has the cluster state stored locally.

    cluster.initial_master_nodes is also a comma-separated list whose elements can be written as: reference

    • the configured node.name.
    • the full hostname, if node.name is not configured
    • an FQDN
    • host, i.e. the publish address from network.host, if node.name is not configured
    • host:port, where the port is the transport port, if node.name is not configured
    Node http and transport settings

    http and transport

    Configuration references: http, transport

    http exposes the Elasticsearch API so clients can talk to ES; transport is used for communication between the cluster's nodes.

    http settings reference:

    Setting  Description
    http.port A bind port range. Defaults to 9200-9300.
    http.publish_port The port that HTTP clients should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the http.port is not directly addressable from the outside. Defaults to the actual port assigned via http.port.
    http.bind_host The host address to bind the HTTP service to. Defaults to http.host (if set) or network.bind_host.
    http.publish_host The host address to publish for HTTP clients to connect to. Defaults to http.host (if set) or network.publish_host.
    http.host Used to set the http.bind_host and the http.publish_host.
    http.max_content_length The max content of an HTTP request. Defaults to 100mb.
    http.max_initial_line_length The max length of an HTTP URL. Defaults to 4kb
    http.max_header_size The max size of allowed headers. Defaults to 8kB
    http.compression Support for compression when possible (with Accept-Encoding). Defaults to true.
    http.compression_level Defines the compression level to use for HTTP responses. Valid values are in the range of 1 (minimum compression) and 9 (maximum compression). Defaults to 3.
    http.cors.enabled Enable or disable cross-origin resource sharing, i.e. whether a browser on another origin can execute requests against Elasticsearch. Set to true to enable Elasticsearch to process pre-flight CORS requests. Elasticsearch will respond to those requests with the Access-Control-Allow-Origin header if the Origin sent in the request is permitted by the http.cors.allow-origin list. Set to false (the default) to make Elasticsearch ignore the Origin request header, effectively disabling CORS requests because Elasticsearch will never respond with the Access-Control-Allow-Origin response header. Note that if the client does not send a pre-flight request with an Origin header or it does not check the response headers from the server to validate the Access-Control-Allow-Origin response header, then cross-origin security is compromised. If CORS is not enabled on Elasticsearch, the only way for the client to know is to send a pre-flight request and realize the required response headers are missing.
    http.cors.allow-origin Which origins to allow. Defaults to no origins allowed. If you prepend and append a / to the value, this will be treated as a regular expression, allowing you to support HTTP and HTTPs. for example using /https?://localhost(:[0-9]+)?/ would return the request header appropriately in both cases. * is a valid value but is considered a security risk as your Elasticsearch instance is open to cross origin requests from anywhere.
    http.cors.max-age Browsers send a “preflight” OPTIONS-request to determine CORS settings. max-age defines how long the result should be cached for. Defaults to 1728000 (20 days)
    http.cors.allow-methods Which methods to allow. Defaults to OPTIONS, HEAD, GET, POST, PUT, DELETE.
    http.cors.allow-headers Which headers to allow. Defaults to X-Requested-With, Content-Type, Content-Length.
    http.cors.allow-credentials Whether the Access-Control-Allow-Credentials header should be returned. Note: This header is only returned, when the setting is set to true. Defaults to false
    http.detailed_errors.enabled Enables or disables the output of detailed error messages and stack traces in response output. Note: When set to false and the error_trace request parameter is specified, an error will be returned; when error_trace is not specified, a simple message will be returned. Defaults to true
    http.pipelining.max_events The maximum number of events to be queued up in memory before an HTTP connection is closed, defaults to 10000.
    http.max_warning_header_count The maximum number of warning headers in client HTTP responses, defaults to unbounded.
    http.max_warning_header_size The maximum total size of warning headers in client HTTP responses, defaults to unbounded.

    transport settings reference:

    Setting  Description
    transport.port A bind port range. Defaults to 9300-9400.
    transport.publish_port The port that other nodes in the cluster should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the transport.port is not directly addressable from the outside. Defaults to the actual port assigned via transport.port.
    transport.bind_host The host address to bind the transport service to. Defaults to transport.host (if set) or network.bind_host.
    transport.publish_host The host address to publish for nodes in the cluster to connect to. Defaults to transport.host (if set) or network.publish_host.
    transport.host Used to set the transport.bind_host and the transport.publish_host.
    transport.connect_timeout The connect timeout for initiating a new connection (in time setting format). Defaults to 30s.
    transport.compress Set to true to enable compression (DEFLATE) between all nodes. Defaults to false.
    transport.ping_schedule Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to 5s in the transport client and -1 (disabled) elsewhere. It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections.
    JVM settings

    The default JVM configuration file is $ES_HOME/config/jvm.options

    # Set both the minimum and maximum heap size to 1 GB.
    $ vim jvm.options
    -Xms1g
    -Xmx1g
     

    Note:

    In production, size the heap according to your actual situation; different node roles need different resource sizes.

    It is recommended not to exceed 32 GB; if you have plenty of memory, 26-30 GB is suggested. Reference
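As a rough starting point (a common rule of thumb, not an official formula): half of physical RAM, capped below 32 GB. A sketch that computes such a value on Linux:

```shell
# Suggest a heap size: half of physical RAM, capped at 31 GB
# (falls back to an assumed 4 GB if /proc/meminfo is unavailable)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
: "${mem_kb:=4194304}"
heap_gb=$(( mem_kb / 1024 / 1024 / 2 ))
[ "$heap_gb" -gt 31 ] && heap_gb=31
[ "$heap_gb" -lt 1 ] && heap_gb=1
echo "-Xms${heap_gb}g -Xmx${heap_gb}g"
```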

    The JVM options can also be set through an environment variable:

    $ ES_JAVA_OPTS="-Xms1g -Xmx1g" ./bin/elasticsearch
     

    With the settings covered, here are the configurations used in this article:

    Notes:

    • node.attr.xxx: yyy sets arbitrary attributes on the node, such as rack or availability zone; features like hot/warm data tiering are built on these.
    • Because my environment has only one host, the nodes are separated by port: each has its own http.port and transport.tcp.port.
    • Discovery uses custom resolvable names, resolved via /etc/hosts, which makes later IP changes easy.
    • All three of my nodes can stand for master in the initial election; in production, choose your master-eligible nodes (node.master: true) deliberately.

    es01

    $ cat es01/config/elasticsearch.yml |grep -Ev "^$|^#"
    cluster.name: es-cluster01
    node.name: es01
    node.attr.rack: r1
    node.attr.zone: A
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9200
    transport.tcp.port: 9331
    discovery.seed_hosts: ["es02:9332", "es03:9333"]
    cluster.initial_master_nodes: ["es01", "es02", "es03"]
     

    es02

    $ cat es02/config/elasticsearch.yml |grep -Ev "^$|^#"
    cluster.name: es-cluster01
    node.name: es02
    node.attr.rack: r1
    node.attr.zone: B
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9201
    transport.tcp.port: 9332
    discovery.seed_hosts: ["es01:9331", "es03:9333"]
    cluster.initial_master_nodes: ["es01", "es02", "es03"]
     

    es03

    $ cat es03/config/elasticsearch.yml |grep -Ev "^$|^#"
    cluster.name: es-cluster01
    node.name: es03
    node.attr.rack: r1
    node.attr.zone: C
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
    http.port: 9202
    transport.tcp.port: 9333
    discovery.seed_hosts: ["es02:9332", "es01:9331"]
    cluster.initial_master_nodes: ["es01", "es02", "es03"]
     

    With Elasticsearch configured, the next step is to start it and test.

    Starting Elasticsearch

    First look at the Elasticsearch command help:

    $ ./es01/bin/elasticsearch --help
    OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    starts elasticsearch

    Option Description
    ------ -----------
    -E <KeyValuePair> Configure a setting
    -V, --version Prints elasticsearch version information and exits
    -d, --daemonize Starts Elasticsearch in the background
    -h, --help show help
    -p, --pidfile <Path> Creates a pid file in the specified path on start
    -q, --quiet Turns off standard output/error streams logging in console
    -s, --silent show minimal output
    -v, --verbose show verbose output
     

    Start the three ES nodes:

    $ ll
    total 0
    drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
    drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
    drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
    lrwxrwxrwx 1 ec2-user ec2-user 22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
    lrwxrwxrwx 1 ec2-user ec2-user 22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
    lrwxrwxrwx 1 ec2-user ec2-user 22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03

    $ ./es01/bin/elasticsearch &
    $ ./es02/bin/elasticsearch &
    $ ./es03/bin/elasticsearch &
     

    Logs are written to $ES_HOME/logs/<CLUSTER_NAME>.log.

    To test, list the nodes in the cluster:

    $ curl localhost:9200/_cat/nodes?v
    ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
    172.17.0.87 32 92 15 0.01 0.04 0.17 dilm - es03
    172.17.0.87 17 92 15 0.01 0.04 0.17 dilm * es02
    172.17.0.87 20 92 15 0.01 0.04 0.17 dilm - es01
     

    Check the cluster's health:

    There are three possible states:

    • green: all data is healthy.
    • yellow: some replica shards are unassigned, but no data is lost; the cluster can recover to green.
    • red: some primary shards are unassigned, meaning part of the data is unavailable.
    $ curl localhost:9200
    {
    "name" : "es01", # current node name
    "cluster_name" : "es-cluster01", # cluster name
    "cluster_uuid" : "n7DDNexcTDik5mU9Y_qrcA",
    "version" : { # version info
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
    },
    "tagline" : "You Know, for Search"
    }

    $ curl localhost:9200/_cat/health
    1574835925 06:25:25 es-cluster01 green 3 3 0 0 0 0 0 0 - 100.0%

    $ curl localhost:9200/_cat/health?v
    epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
    1574835928 06:25:28 es-cluster01 green 3 3 0 0 0 0 0 0 - 100.0%
     
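In monitoring scripts, the status column of the _cat/health output can be extracted with awk. A sketch using a captured sample line rather than a live call:

```shell
# Sample _cat/health line (captured above); in practice you would use:
#   line=$(curl -s localhost:9200/_cat/health)
line='1574835925 06:25:25 es-cluster01 green 3 3 0 0 0 0 0 0 - 100.0%'
status=$(echo "$line" | awk '{print $4}')
echo "$status"
```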

    List all of the /_cat endpoints:

    $ curl localhost:9200/_cat
    =^.^=
    /_cat/allocation
    /_cat/shards
    /_cat/shards/{index}
    /_cat/master
    /_cat/nodes
    /_cat/tasks
    /_cat/indices
    /_cat/indices/{index}
    /_cat/segments
    /_cat/segments/{index}
    /_cat/count
    /_cat/count/{index}
    /_cat/recovery
    /_cat/recovery/{index}
    /_cat/health
    /_cat/pending_tasks
    /_cat/aliases
    /_cat/aliases/{alias}
    /_cat/thread_pool
    /_cat/thread_pool/{thread_pools}
    /_cat/plugins
    /_cat/fielddata
    /_cat/fielddata/{fields}
    /_cat/nodeattrs
    /_cat/repositories
    /_cat/snapshots/{repository}
    /_cat/templates
     

    Check the attributes we defined for each node earlier:

    $ curl localhost:9200/_cat/nodeattrs
    es03 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
    es03 172.17.0.87 172.17.0.87 rack r1 # custom
    es03 172.17.0.87 172.17.0.87 ml.max_open_jobs 20
    es03 172.17.0.87 172.17.0.87 xpack.installed true
    es03 172.17.0.87 172.17.0.87 zone C # custom
    es02 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
    es02 172.17.0.87 172.17.0.87 rack r1 # custom
    es02 172.17.0.87 172.17.0.87 ml.max_open_jobs 20
    es02 172.17.0.87 172.17.0.87 xpack.installed true
    es02 172.17.0.87 172.17.0.87 zone B # custom
    es01 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
    es01 172.17.0.87 172.17.0.87 rack r1 # custom
    es01 172.17.0.87 172.17.0.87 ml.max_open_jobs 20
    es01 172.17.0.87 172.17.0.87 xpack.installed true
    es01 172.17.0.87 172.17.0.87 zone A # custom
     

    Notice that all of these API endpoints are reachable without any authentication, and any node can join the cluster; neither is acceptable in production. The next sections show how to enable auth and node-to-node SSL.

    Enabling Cluster Auth and Node-to-Node SSL

    Enabling auth for the ES cluster

    In recent ES versions the X-Pack component has been open-sourced, but open source != free; still, some basic security features are free, including the auth and node-to-node SSL used in this article.

    First we try to generate passwords with $ES_HOME/bin/elasticsearch-setup-passwords. The help:

    $ ./es01/bin/elasticsearch-setup-passwords --help
    Sets the passwords for reserved users

    Commands
    --------
    auto - Uses randomly generated passwords
    interactive - Uses passwords entered by a user

    Non-option arguments:
    command

    Option Description
    ------ -----------
    -h, --help show help
    -s, --silent show minimal output
    -v, --verbose show verbose output

    # Auto-generate passwords -- this fails
    $ ./es01/bin/elasticsearch-setup-passwords auto

    Unexpected response code [500] from calling GET http://172.17.0.87:9200/_security/_authenticate?pretty
    It doesn't look like the X-Pack security feature is enabled on this Elasticsearch node.
    Please check if you have enabled X-Pack security in your elasticsearch.yml configuration file.

    ERROR: X-Pack Security is disabled by configuration.
     

    Looking at the es01 log, we find an error:

    [2019-11-27T14:35:13,391][WARN ][r.suppressed             ] [es01] path: /_security/_authenticate, params: {pretty=}
    org.elasticsearch.ElasticsearchException: Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node.
    ......
     

    The message says security must be enabled first:

    Following the hint, add the following line on all three ES nodes:

    $ echo "xpack.security.enabled: true" >> es01/config/elasticsearch.yml
    $ echo "xpack.security.enabled: true" >> es02/config/elasticsearch.yml
    $ echo "xpack.security.enabled: true" >> es03/config/elasticsearch.yml
     

    Then restart:

    $ ps -ef|grep elasticsearch
    # Find the pid of each ES node and kill it; note: do not use -9
     

    The nodes now fail to start with this error:

    ERROR: [1] bootstrap checks failed
    [1]: Transport SSL must be enabled if security is enabled on a [basic] license. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]
     

    Fine, add that setting too:

    $ echo "xpack.security.transport.ssl.enabled: true" >> es01/config/elasticsearch.yml
    $ echo "xpack.security.transport.ssl.enabled: true" >> es02/config/elasticsearch.yml
    $ echo "xpack.security.transport.ssl.enabled: true" >> es03/config/elasticsearch.yml
     

    Starting again, as soon as the second node comes up, both ES nodes log errors continuously:

    [2019-11-27T14:50:58,643][WARN ][o.e.t.TcpTransport       ] [es01] exception caught on transport layer [Netty4TcpChannel{localAddress=/172.17.0.87:9331, remoteAddress=/172.17.0.87:56654}], closing connection
    io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:475) ~[netty-codec-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) ~[netty-codec-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:597) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:551) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) [netty-common-4.1.38.Final.jar:4.1.38.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.38.Final.jar:4.1.38.Final]
        at java.lang.Thread.run(Thread.java:830) [?:?]
    Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
    ......
     

    The nodes have no configured way to authenticate each other yet. Let's keep going and configure it:

    Configuring node-to-node SSL

    Note: this configures SSL for the transport layer between ES cluster nodes only. The nodes' HTTP API is not covered, so API access to ES does not require a certificate.

    Official references:

    https://www.elastic.co/guide/en/elasticsearch/reference/current/ssl-tls.html

    https://www.elastic.co/guide/en/elasticsearch/reference/7.4/configuring-tls.html
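For orientation, the transport-SSL settings the generated certificates will eventually plug into look roughly like this (a sketch based on the referenced docs; the certificate filename is an assumption until the file is actually generated below):

```yaml
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```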

    Create the SSL/TLS certificates with the command $ES_HOME/bin/elasticsearch-certutil

    # View the command help
    $ ./es01/bin/elasticsearch-certutil --help
    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG (file:/opt/elk74/elasticsearch-7.4.2-01/lib/tools/security-cli/bcprov-jdk15on-1.61.jar) to constructor sun.security.provider.Sun()
    WARNING: Please consider reporting this to the maintainers of org.bouncycastle.jcajce.provider.drbg.DRBG
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Simplifies certificate creation for use with the Elastic Stack

    Commands
    --------
    csr - generate certificate signing requests
    cert - generate X.509 certificates and keys
    ca - generate a new local certificate authority

    Non-option arguments:
    command

    Option Description
    ------ -----------
    -h, --help show help
    -s, --silent show minimal output
    -v, --verbose show verbose output
     

    Create the CA certificate:

    # Command help:
    $ ./bin/elasticsearch-certutil ca --help
    generate a new local certificate authority

    Option Description
    ------ -----------
    -E <KeyValuePair> Configure a setting
    --ca-dn distinguished name to use for the generated ca. defaults
    to CN=Elastic Certificate Tool Autogenerated CA
    --days <Integer> number of days that the generated certificates are valid
    -h, --help show help
    --keysize <Integer> size in bits of RSA keys
    --out path to the output file that should be produced
    --pass password for generated private keys
    --pem output certificates and keys in PEM format instead of PKCS#12 ## PKCS#12 is the default; --pem produces PEM output with separate key, crt and ca files.
    -s, --silent show minimal output
    -v, --verbose show verbose output

    # Create the CA certificate
    $ ./es01/bin/elasticsearch-certutil ca -v
    This tool assists you in the generation of X.509 certificates and certificate
    signing requests for use with SSL/TLS in the Elastic stack.

    The 'ca' mode generates a new 'certificate authority'
    This will create a new X.509 certificate and private key that can be used
    to sign certificate when running in 'cert' mode.

    Use the 'ca-dn' option if you wish to configure the 'distinguished name'
    of the certificate authority

    By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

    If you elect to generate PEM format certificates (the -pem option), then the output will
    be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: # enter a name for the CA file to save
Enter password for elastic-stack-ca.p12 : # enter a certificate password; we leave it empty here

# By default the CA certificate is written to the $ES_HOME directory
    $ ll es01/
    total 560
    drwxr-xr-x 2 ec2-user ec2-user 4096 Oct 29 04:45 bin
    drwxr-xr-x 2 ec2-user ec2-user 178 Nov 27 13:45 config
    drwxrwxr-x 3 ec2-user ec2-user 19 Nov 27 13:46 data
-rw------- 1 ec2-user ec2-user 2527 Nov 27 15:05 elastic-stack-ca.p12 # here it is
    drwxr-xr-x 9 ec2-user ec2-user 107 Oct 29 04:45 jdk
    drwxr-xr-x 3 ec2-user ec2-user 4096 Oct 29 04:45 lib
    -rw-r--r-- 1 ec2-user ec2-user 13675 Oct 29 04:38 LICENSE.txt
    drwxr-xr-x 2 ec2-user ec2-user 4096 Nov 27 14:48 logs
    drwxr-xr-x 37 ec2-user ec2-user 4096 Oct 29 04:45 modules
    -rw-r--r-- 1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
    drwxr-xr-x 2 ec2-user ec2-user 6 Oct 29 04:45 plugins
    -rw-r--r-- 1 ec2-user ec2-user 8500 Oct 29 04:38 README.textile
     

This command generates a keystore file in PKCS#12 format named elastic-stack-ca.p12, containing the CA certificate and its private key.
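For reference, a comparable self-signed CA can be sketched with plain openssl; this is my own illustration, not something certutil does (certutil bundles certificate and key into a single PKCS#12 file, while openssl here writes them as separate PEM files):

```shell
# Illustrative only: a self-signed CA roughly equivalent to certutil's,
# written as separate PEM files (ca.key + ca.crt) instead of one .p12.
openssl req -x509 -newkey rsa:2048 -nodes -days 1095 \
  -subj "/CN=Elastic Certificate Tool Autogenerated CA" \
  -keyout ca.key -out ca.crt
openssl x509 -in ca.crt -noout -subject
```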

Create the certificate used for authentication between nodes:

# Command help:
    $ ./bin/elasticsearch-certutil cert --help
    generate X.509 certificates and keys

    Option Description
    ------ -----------
    -E <KeyValuePair> Configure a setting
    --ca path to an existing ca key pair (in PKCS#12 format)
    --ca-cert path to an existing ca certificate
    --ca-dn distinguished name to use for the generated ca. defaults
    to CN=Elastic Certificate Tool Autogenerated CA
    --ca-key path to an existing ca private key
    --ca-pass password for an existing ca private key or the generated
    ca private key
    --days <Integer> number of days that the generated certificates are valid
--dns comma separated DNS names # specify DNS names
    -h, --help show help
    --in file containing details of the instances in yaml format
--ip comma separated IP addresses # specify IP addresses
    --keep-ca-key retain the CA private key for future use
    --keysize <Integer> size in bits of RSA keys
    --multiple generate files for multiple instances
    --name name of the generated certificate
    --out path to the output file that should be produced
    --pass password for generated private keys
    --pem output certificates and keys in PEM format instead of
    PKCS#12
    -s, --silent show minimal output
    -v, --verbose show verbose output

# Create the node certificate
    $ cd es01
    $ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
    This tool assists you in the generation of X.509 certificates and certificate
    signing requests for use with SSL/TLS in the Elastic stack.

    The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
    on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
    instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
    the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
    Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
    may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
    hostname, which will be used as the Common Name of the certificate. A full
    distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
    name would result in an invalid file or directory name. The name provided here
    is used as the directory name (within the zip) and the prefix for the key and
    certificate files. The filename is required if you are prompted and the name
    is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
    comma separated string. If no IP addresses or DNS names are provided, you may
    disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
    -ca or -ca-cert command line options.

    By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

    If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
    then the output will be be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) : # enter the CA certificate's password; we didn't set one, so just press Enter
Please enter the desired output file [elastic-certificates.p12]: # output file name; press Enter to keep the default
Enter password for elastic-certificates.p12 : # certificate password; leave it empty and press Enter

Certificates written to /opt/elk74/elasticsearch-7.4.2-01/elastic-certificates.p12 # output location

    This file should be properly secured as it contains the private key for
    your instance.

    This file is a self contained file and can be copied and used 'as is'
    For each Elastic product that you wish to configure, you should copy
    this '.p12' file to the relevant configuration directory
    and then follow the SSL configuration instructions in the product guide.

    For client applications, you may only need to copy the CA certificate and
    configure the client to trust this certificate.
    $ ll
    total 564
    drwxr-xr-x 2 ec2-user ec2-user 4096 Oct 29 04:45 bin
    drwxr-xr-x 2 ec2-user ec2-user 178 Nov 27 13:45 config
    drwxrwxr-x 3 ec2-user ec2-user 19 Nov 27 13:46 data
-rw------- 1 ec2-user ec2-user 3451 Nov 27 15:10 elastic-certificates.p12 # here
-rw------- 1 ec2-user ec2-user 2527 Nov 27 15:05 elastic-stack-ca.p12 # and here
    drwxr-xr-x 9 ec2-user ec2-user 107 Oct 29 04:45 jdk
    drwxr-xr-x 3 ec2-user ec2-user 4096 Oct 29 04:45 lib
    -rw-r--r-- 1 ec2-user ec2-user 13675 Oct 29 04:38 LICENSE.txt
    drwxr-xr-x 2 ec2-user ec2-user 4096 Nov 27 14:48 logs
    drwxr-xr-x 37 ec2-user ec2-user 4096 Oct 29 04:45 modules
    -rw-r--r-- 1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
    drwxr-xr-x 2 ec2-user ec2-user 6 Oct 29 04:45 plugins
    -rw-r--r-- 1 ec2-user ec2-user 8500 Oct 29 04:38 README.textile
     

This command generates a keystore file in PKCS#12 format named elastic-certificates.p12, containing the node certificate, its private key, and the CA certificate.

By default the generated certificate contains no hostname information (it has no Subject Alternative Name fields), so it can be used on any node, but you must configure Elasticsearch to disable hostname verification.

Configure the ES nodes to use this certificate:

    $ mkdir config/certs
    $ mv elastic-* config/certs/
    $ ll config/certs/
    total 8
    -rw------- 1 ec2-user ec2-user 3451 Nov 27 15:10 elastic-certificates.p12
    -rw------- 1 ec2-user ec2-user 2527 Nov 27 15:05 elastic-stack-ca.p12

# Copy this directory to all ES nodes
    $ cp -a config/certs /opt/elk74/es02/config/
    $ cp -a config/certs /opt/elk74/es03/config/

# Configure elasticsearch.yml; note that every node needs this. The settings below use the PKCS#12 certificate.
    $ vim es01/config/elasticsearch.yml
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate # verify the certificate only
    xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

# If you generated PEM format with --pem, use the following settings instead:
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /home/es/config/node01.key # private key
xpack.security.transport.ssl.certificate: /home/es/config/node01.crt # certificate
xpack.security.transport.ssl.certificate_authorities: [ "/home/es/config/ca.crt" ] # CA certificate

# If you set a password on the node certificate, add that password to the Elasticsearch keystore
## PKCS#12 format:
    bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

## PEM format
    bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase
     

Note: the CA certificate file does not need to be copied into config/certs on every node; only the cert file is required. I copied both here for convenience.

Also keep the CA certificate (and its password, if you set one) stored safely; you will need it when adding ES nodes later.

xpack.security.transport.ssl.verification_mode sets the verification mode; see the official docs:

• full: verifies that the certificate was signed by a trusted CA, and also that the server's hostname or IP address matches the names inside the certificate.
• certificate: the mode we use here; only verifies that the certificate was signed by a trusted CA.
• none: verifies nothing, effectively turning SSL/TLS verification off; only for environments you trust completely.
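If you later want `full` verification, each node's certificate must carry SAN entries matching its hostname/IP. A hedged sketch of the per-node settings, assuming certificates generated with `--dns`/`--ip` (the file name es01.p12 is illustrative):

```yaml
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full   # also checks hostname/IP against the cert's SANs
xpack.security.transport.ssl.keystore.path: certs/es01.p12
xpack.security.transport.ssl.truststore.path: certs/es01.p12
```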

With that configured, start the ES nodes again to test:

This time they start normally. Now let's get back to generating the built-in users' passwords; run this on any one of the nodes:

    $ ./es01/bin/elasticsearch-setup-passwords auto
    Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
    The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y # type y to confirm


    Changed password for user apm_system
    PASSWORD apm_system = yc0GJ9QS4AP69pVzFKiX

    Changed password for user kibana
    PASSWORD kibana = UKuHceHWudloJk9NvHlX

    Changed password for user logstash_system
    PASSWORD logstash_system = N6pLSkNSNhT0UR6radrZ

    Changed password for user beats_system
    PASSWORD beats_system = BmsiDzgx1RzqHIWTri48

    Changed password for user remote_monitoring_user
    PASSWORD remote_monitoring_user = dflPnqGAQneqjhU1XQiZ

    Changed password for user elastic
    PASSWORD elastic = Tu8RPllSZz6KXkgZWFHv
     

Check the cluster's nodes:

    $ curl -u elastic localhost:9200/_cat/nodes
Enter host password for user 'elastic': # enter the elastic user's password: Tu8RPllSZz6KXkgZWFHv
    172.17.0.87 14 92 18 0.16 0.11 0.37 dilm - es02
    172.17.0.87 6 92 17 0.16 0.11 0.37 dilm - es03
    172.17.0.87 8 92 19 0.16 0.11 0.37 dilm * es01
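Incidentally, `curl -u` is just HTTP Basic auth: the header value is the base64 of `user:password`. A minimal sketch, reusing the example password from the output above:

```shell
# Build the same Authorization header that `curl -u elastic:...` sends.
auth=$(printf 'elastic:Tu8RPllSZz6KXkgZWFHv' | base64)
echo "Authorization: Basic $auth"
# Equivalent request:
#   curl -H "Authorization: Basic $auth" localhost:9200/_cat/nodes
```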
     

Note:

This only encrypts the inter-node (transport) communication of the ES cluster with certificates; the HTTP API is still authenticated with username and password. If you need the HTTP layer protected with SSL as well, see: TLS HTTP

For the full list of security settings, see the reference.
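For completeness, a hedged sketch of what enabling TLS on the HTTP layer looks like, reusing the same PKCS#12 file (consult the TLS HTTP docs above before relying on this):

```yaml
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
```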

And with that, a reasonably secure Elasticsearch cluster has been created.

Installing and configuring Kibana

Next, install Kibana so the cluster can be accessed from a browser.

Download:

    $ wget -c "https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-linux-x86_64.tar.gz"
    $ tar xf /opt/softs/elk7.4/kibana-7.4.2-linux-x86_64.tar.gz
    $ ln -s kibana-7.4.2-linux-x86_64 kibana
     

Configure Kibana:

    $ cat kibana/config/kibana.yml |grep -Ev "^$|^#"
    server.port: 5601
    server.host: "0.0.0.0"
    server.name: "mykibana"
    elasticsearch.hosts: ["http://localhost:9200"]
    kibana.index: ".kibana"
    elasticsearch.username: "kibana"
    elasticsearch.password: "UKuHceHWudloJk9NvHlX"
    # i18n.locale: "en"
    i18n.locale: "zh-CN"
xpack.security.encryptionKey: Hz*9yFFaPejHvCkhT*ddNx%WsBgxVSCQ # a randomly generated 32-character encryption key
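Instead of inventing the 32-character key by hand, you can generate one; a minimal sketch:

```shell
# 16 random bytes, hex-encoded, is exactly 32 characters,
# which fits xpack.security.encryptionKey.
key=$(openssl rand -hex 16)
echo "xpack.security.encryptionKey: ${key}"
echo "${#key}"   # 32
```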
     

Kibana command help:

    $ ./kibana/bin/kibana --help

    Usage: bin/kibana [command=serve] [options]

    Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.

    Commands:
    serve [options] Run the kibana server
    help <command> Get the help for a specific command

    "serve" Options:

    -e, --elasticsearch <uri1,uri2> Elasticsearch instances
    -c, --config <path> Path to the config file, use multiple --config args to include multiple config files (default: ["/opt/elk74/kibana-7.4.2-linux-x86_64/config/kibana.yml"])
    -p, --port <port> The port to bind to
    -q, --quiet Prevent all logging except errors
    -Q, --silent Prevent all logging
    --verbose Turns on verbose logging
    -H, --host <host> The host to bind to
    -l, --log-file <path> The file to log to
    --plugin-dir <path> A path to scan for plugins, this can be specified multiple times to specify multiple directories (default: ["/opt/elk74/kibana-7.4.2-linux-x86_64/plugins","/opt/elk74/kibana-7.4.2-linux-x86_64/src/legacy/core_plugins"])
    --plugin-path <path> A path to a plugin which should be included by the server, this can be specified multiple times to specify multiple paths (default: [])
    --plugins <path> an alias for --plugin-dir
    --optimize Optimize and then stop the server
    -h, --help output usage information
     

Browse to the Kibana host's IP on port 5601 and you will see the login page:

Log in with the elastic superuser and the password generated above, then take a look at the license:

Check the node monitoring for the Elasticsearch cluster (CPU, load, JVM usage, disk space); it tells you when to tune or scale out the ES nodes:

That completes an Elasticsearch cluster running the free, never-expiring Basic license, with basic authentication and inter-node SSL/TLS enabled.

But wait: have you noticed that Kibana's config file contains the username and password in plain text, protected only by Linux file permissions? Is there a safer way? There is: the keystore.

Securing Kibana settings with the keystore

See the official docs.

Check the `kibana-keystore` command help:

    $ ./bin/kibana-keystore --help
    Usage: bin/kibana-keystore [options] [command]

    A tool for managing settings stored in the Kibana keystore

    Options:
    -V, --version output the version number
    -h, --help output usage information

    Commands:
    create [options] Creates a new Kibana keystore
    list [options] List entries in the keystore
    add [options] <key> Add a string setting to the keystore
    remove [options] <key> Remove a setting from the keystore
     

First, create the keystore:

    $ bin/kibana-keystore create
Created Kibana keystore in /opt/elk74/kibana-7.4.2-linux-x86_64/data/kibana.keystore # default location
     

Add settings:

We want to hide, or remove outright, the sensitive entries in the kibana.yml config file, namely elasticsearch.username and elasticsearch.password;

so here we add those two settings, elasticsearch.username and elasticsearch.password, to the keystore:

# Help for the add command:
    $ ./bin/kibana-keystore add --help
    Usage: add [options] <key>

    Add a string setting to the keystore

    Options:
    -f, --force overwrite existing setting without prompting
    -x, --stdin read setting value from stdin
    -s, --silent prevent all logging
    -h, --help output usage information

# Add the elasticsearch.username key; the name must exactly match the key used in kibana.yml
    $ ./bin/kibana-keystore add elasticsearch.username
Enter value for elasticsearch.username: ****** # enter the value, i.e. the account Kibana uses to connect to ES: kibana

# Add the elasticsearch.password key
    $ ./bin/kibana-keystore add elasticsearch.password
Enter value for elasticsearch.password: ******************** # enter the matching password: UKuHceHWudloJk9NvHlX
     

Now simply delete those two entries from kibana.yml and start Kibana; it will pick the values up from the keystore automatically. (To script this non-interactively, the -x/--stdin option shown in the help above reads the value from stdin.)

The final kibana.yml looks like this:

    server.port: 5601
    server.host: "0.0.0.0"
    server.name: "mykibana"
    elasticsearch.hosts: ["http://localhost:9200"]
    kibana.index: ".kibana"
    # i18n.locale: "en"
    i18n.locale: "zh-CN"
xpack.security.encryptionKey: Hz*9yFFaPejHvCkhT*ddNx%WsBgxVSCQ # a randomly generated 32-character encryption key
     

The config file no longer exposes sensitive information, giving you better security.

The keystore mechanism is not unique to Kibana; the other Elastic Stack products support it as well.

Installing the Elasticsearch Head plugin

    GitHub:

    https://github.com/mobz/elasticsearch-head

    WebSite:

    http://mobz.github.io/elasticsearch-head/

We'll use the simplest approach here: the Elasticsearch Head Chrome extension:

Open the following address in Chrome to install it:

    https://chrome.google.com/webstore/detail/elasticsearch-head/ffmkiejjmecolpfloofpjologoblkegm/

Once installed, just click the ES Head icon.

Note: installing ES Head as a Chrome extension does not require enabling the CORS cross-origin settings on the ES cluster:

    https://github.com/mobz/elasticsearch-head#enable-cors-in-elasticsearch
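If you instead run Head as a standalone web app rather than the Chrome extension, CORS does need to be enabled on every ES node; a hedged sketch per the link above (tighten allow-origin in production):

```yaml
http.cors.enabled: true
http.cors.allow-origin: "*"
```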

Enter the ES cluster's address, click Connect, and then enter the username and password:

That about wraps up this article; below are a few extra tips:

Doing full-cluster and rolling restarts correctly in production

For example, you may later need to restart the entire cluster, or change some settings and restart the cluster's nodes one at a time. While a node is down, the ES cluster automatically replicates that node's shards to the other nodes and rebalances shards between nodes, which generates a lot of I/O, and for a planned restart that I/O is completely unnecessary.

See the official docs.

Disable shard allocation:

    curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
    {
    "persistent": {
    "cluster.routing.allocation.enable": "primaries"
    }
    }
    '
     

Stop indexing and perform a synced flush:

    curl -X POST "localhost:9200/_flush/synced?pretty"
     

After those two steps, shut down the whole cluster. Once the configuration change is done, start the cluster again and then re-enable the shard allocation you disabled earlier:

Re-enable shard allocation:

    curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
    {
    "persistent": {
    "cluster.routing.allocation.enable": null
    }
    }
    '
     

For a rolling restart of the cluster's nodes, run the same two steps above before shutting down each node, then stop that node, make the change, start it again, re-enable shard allocation, wait for the cluster status to turn Green, and only then move on to the next node, repeating in turn.
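The "wait for the cluster to go Green" step can be scripted; a hedged sketch (the commented usage assumes a node on localhost:9200 and the credentials from earlier):

```shell
# Succeeds when the health JSON piped into it reports a green status.
is_green() {
  grep -q '"status":"green"'
}

# Usage during a rolling restart:
#   until curl -s -u elastic:PASSWORD localhost:9200/_cluster/health | is_green; do
#     sleep 10
#   done
```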

• Original post: https://www.cnblogs.com/mscm/p/13282393.html