Elasticsearch Configuration File Explained

     

一、Cluster settings

    1. Cluster

indices.ttl.interval  How often the purge process runs to delete documents whose TTL has expired. Default: 60s.

indices.cache.filter.size  Elasticsearch has two kinds of filter cache: a node-level cache (the default type) and an index-level filter cache. The node-level cache is shared by the whole node and is set with this property, either as a percentage or as an absolute size. The index-level cache (index.cache.filter.size) applies to a single index; the official documentation discourages it, because nobody can predict how large an index-level cache may grow (it can exceed the node's memory), and one index may be spread across several nodes, so the per-index caches add up on every node that holds its shards. Default: 10%. (See the sketch at the end of this subsection.)

discovery.zen.minimum_master_nodes  Guards against split-brain (when network failures cut off part of the cluster, the isolated nodes may elect their own master and form a second cluster that looks like the original). This setting is the minimum number of master-eligible nodes that must be visible to elect a master. Default: 1. The usual rule of thumb is N/2 + 1 (rounded down), where N is the number of master-eligible nodes in the cluster.
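As a rough sketch (values are illustrative, not taken from the original), a three-node cluster in which every node is master-eligible might set these two options in elasticsearch.yml as follows:

indices.cache.filter.size: 20%            # node-level filter cache; accepts a percentage or an absolute size such as 512mb
discovery.zen.minimum_master_nodes: 2     # three master-eligible nodes: floor(3/2) + 1 = 2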

2. Routing

node_initial_primaries_recoveries  How many initial primary-shard recoveries may run concurrently on a single node.

cluster_concurrent_rebalance  How many concurrent shard rebalances are allowed cluster-wide.

awareness.attributes  Allocation awareness: distributes shards and replicas according to generic attributes associated with the nodes (see the sketch below).

node_concurrent_recoveries  How many concurrent recoveries are allowed on each node. Default: 2.

disable_allocation  Allows shard allocation to be disabled.

disable_replica_allocation  Allows replica allocation to be disabled.
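A hedged sketch of how the routing settings above appear under their full cluster.routing.allocation prefix (the attribute name rack_id and the values are illustrative, not from the original):

cluster.routing.allocation.cluster_concurrent_rebalance: 2      # at most two shard rebalances in flight cluster-wide
cluster.routing.allocation.node_concurrent_recoveries: 2        # per-node recovery concurrency (the default)
cluster.routing.allocation.awareness.attributes: rack_id        # spread copies across the rack_id node attribute
cluster.routing.allocation.disable_allocation: false            # set to true to pause shard allocation
cluster.routing.allocation.disable_replica_allocation: false    # set to true to pause replica allocation only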

3. Recovery

concurrent_streams  Limits the number of concurrent streams opened when recovering a shard from a peer. Default: 5.

file_chunk_size  Size of the file chunks transferred during recovery. Default: 512kb.

translog_ops  Number of translog operations transferred per request during recovery.

translog_size  Size of the translog data transferred per request during recovery.

max_bytes_per_sec  Throttles recovery throughput per second. Default: 20mb.

compress  Enables compression for the recovery traffic between nodes. Default: disabled.
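Spelled out with their full indices.recovery prefix, the settings above would look roughly like this (only the values stated above are shown; a sketch, not a recommendation):

indices.recovery.concurrent_streams: 5
indices.recovery.file_chunk_size: 512kb
indices.recovery.max_bytes_per_sec: 20mb
indices.recovery.compress: true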

    二、Elasticsearch Configuration Example

    #####################Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.
#

# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
#node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html>

    三、Cluster

    ################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#

    cluster.name: log-center-it-test

# the cluster name

    cluster.routing.allocation.disk.watermark.low: "90%"

    cluster.routing.allocation.disk.watermark.high: "96%"

    indices.fielddata.cache.size: "30%"

# size of the index fielddata cache
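For reference, a hedged note: the disk watermarks can also be given as absolute free-space values instead of percentages (figures illustrative, not from the original), e.g.:

#cluster.routing.allocation.disk.watermark.low: 50gb     # stop allocating new shards to a node with less than 50gb free
#cluster.routing.allocation.disk.watermark.high: 20gb    # start relocating shards away below 20gb free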

    四、Node

    #################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#

    node.name: "ip"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#

    #node.master: true

    #

# Allow this node to store data (enabled by default):
#

    #node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#

    node.master: false

    node.data: true

    #

# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#

    #node.master: true

    #node.data: false

    #

# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#

    #node.master: false

    #node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as <http://www.elasticsearch.org/overview/marvel/>,
# <http://github.com/karmi/elasticsearch-paramedic>,
# <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#

    #node.rack: rack314
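A hedged sketch (attribute name and value are illustrative) of pairing such a node attribute with allocation awareness, so that primary and replica copies are spread across racks:

node.rack: rack314
cluster.routing.allocation.awareness.attributes: rack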

# By default, multiple nodes are allowed to start from the same installation location
# to disable it, set the following:

    #node.max_local_storage_nodes: 1

    五、Index

    #################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#

# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#

# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and

    #<http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>

    # for more information.

# Set the number of shards (splits) of an index (5 by default):
#

    #index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#

    #index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
# (in plain terms: a tiny local index does not need replicas or multiple shards)
#

    #index.number_of_shards: 1

    #index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#

# 1. Having more *shards* enhances the _indexing_ performance and allows to
#    _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
#    cluster _availability_.
#

    # The "number_of_shards" is a one-time setting for an index.

    #

    # "number_of_shards"对一个索引的一次性设置

    # The "number_of_replicas" can be increased or decreased anytime,

    # by using the Index Update Settings API.

    #

    #"number_of_replicas"可以在任何时间增加或者取消,通过使用更新设置索引API

# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

             

# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.

    六、Paths

    ################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#

    #path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#

    #path.data: /path/to/data

    #

# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#

    #path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#

    #path.work: /path/to/work

# Path to log files:
#

    #path.logs: /path/to/logs

# Path to where plugins are installed:
#

    #path.plugins: /path/to/plugins
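Purely as an illustration (these paths are not from the original configuration), a typical Linux layout might be:

#path.conf: /etc/elasticsearch
#path.data: /var/lib/elasticsearch
#path.work: /tmp/elasticsearch
#path.logs: /var/log/elasticsearch
#path.plugins: /usr/share/elasticsearch/plugins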

    七、Plugin

    #################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#

    #plugin.mandatory: mapper-attachments,lang-groovy

    八、Memory

    ################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#

# Set this property to true to lock the memory:
#

    #bootstrap.mlockall: true

    bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#

# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.

    九、Network And HTTP

    ############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#

    #network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#

    #network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#

    #network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#

    #transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#

    #transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#

    #http.port: 9200

# Set a custom allowed content length:
#

    #http.max_content_length: 100mb

# Disable HTTP completely:
#

    #http.enabled: false

    十、Gateway

    ################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html>.
#
# The default gateway type is the "local" gateway (recommended):
#

    #gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).

# Allow recovery process after N nodes in a cluster are up:
#

    #gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#

    #gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#

    #gateway.expected_nodes: 2
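A hedged example of how the three gateway settings above can be combined for, say, a three-node cluster (values illustrative, not from the original):

gateway.recover_after_nodes: 2    # begin recovery once two nodes have joined...
gateway.recover_after_time: 5m    # ...after waiting up to five minutes for the rest
gateway.expected_nodes: 3         # or immediately, as soon as all three expected nodes are up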

     

    十一、Recovery Throttling

    ############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#

# 1. During the initial recovery
#

    #cluster.routing.allocation.node_initial_primaries_recoveries: 4

    #

# 2. During adding/removing nodes, rebalancing, etc
#

    #cluster.routing.allocation.node_concurrent_recoveries: 2

# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
#

    #indices.recovery.max_bytes_per_sec: 20mb

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#

    #indices.recovery.concurrent_streams: 5
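As a hedged tuning sketch for a cluster on a fast private network (figures illustrative, not from the original), the recovery limits above could be raised together:

cluster.routing.allocation.node_initial_primaries_recoveries: 8
cluster.routing.allocation.node_concurrent_recoveries: 4
indices.recovery.max_bytes_per_sec: 100mb
indices.recovery.concurrent_streams: 8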

    十二、Discovery

    ################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. This should be set to a quorum/majority of
# the master-eligible nodes in the cluster.
#
#discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#

    #discovery.zen.ping.timeout: 3s

    discovery.zen.ping.timeout: 120s

# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>
#
# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#

# 1. Disable multicast discovery (enabled by default):
#

    #discovery.zen.ping.multicast.enabled: false

    #

# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#

    #discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]
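A hedged sketch (host names illustrative, not from the original) of a purely unicast setup that combines the two steps above with a quorum of master-eligible nodes:

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-node1", "es-node2:9300", "es-node3"]
discovery.zen.minimum_master_nodes: 2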

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#

# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#

    # For more information, see

    # <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html>

    #

    # See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>

    # for a step-by-step tutorial.

# GCE discovery allows to use Google Compute Engine API in order to perform discovery.
#

# You have to install the cloud-gce plugin for enabling the GCE discovery.
#

    # For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-gce>.

# Azure discovery allows to use Azure API in order to perform discovery.
#

# You have to install the cloud-azure plugin for enabling the Azure discovery.
#

    # For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-azure>.

    十三、Slow Log

    ################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

    #index.search.slowlog.threshold.query.warn: 10s

    #index.search.slowlog.threshold.query.info: 5s

    #index.search.slowlog.threshold.query.debug: 2s

    #index.search.slowlog.threshold.query.trace: 500ms

    #index.search.slowlog.threshold.fetch.warn: 1s

    #index.search.slowlog.threshold.fetch.info: 800ms

    #index.search.slowlog.threshold.fetch.debug: 500ms

    #index.search.slowlog.threshold.fetch.trace: 200ms

    #index.indexing.slowlog.threshold.index.warn: 10s

    #index.indexing.slowlog.threshold.index.info: 5s

    #index.indexing.slowlog.threshold.index.debug: 2s

    #index.indexing.slowlog.threshold.index.trace: 500ms

    ################################## GC Logging ################################

    #monitor.jvm.gc.young.warn: 1000ms

    #monitor.jvm.gc.young.info: 700ms

    #monitor.jvm.gc.young.debug: 400ms

    #monitor.jvm.gc.old.warn: 10s

    #monitor.jvm.gc.old.info: 5s

    #monitor.jvm.gc.old.debug: 2s

    ################################## Security ################################

    # Uncomment if you want to enable JSONP as a valid return transport on the

    # http server. With this enabled, it may pose a security risk, so disabling

    # it unless you need it is recommended (it is disabled by default).

    #

    #http.jsonp.enable: true
