  • Elasticsearch exception handling

    cluster_block_exception

    https://stackoverflow.com/questions/50609417/elasticsearch-error-cluster-block-exception-forbidden-12-index-read-only-all

    The error occurs when inserting a document into an index.

    Request

    PUT http://localhost:9200/customer/_doc/1?pretty

    PUT data:
    {
      "name": "John Doe"
    }
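
    For reference, the same request can be sent with curl (the index, document id and body are the ones from the example above):

    curl -XPUT -H "Content-Type: application/json" "http://localhost:9200/customer/_doc/1?pretty" -d '{"name": "John Doe"}'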

    Response

    {
      "error" : {
        "root_cause" : [
          {
            "type" : "cluster_block_exception",
            "reason" : "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
          }
        ],
        "type" : "cluster_block_exception",
        "reason" : "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
      },
      "status" : 403
    }


    This happens when Elasticsearch thinks the disk is running low on space so it puts itself into read-only mode.

    By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.

    The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.

    For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.
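
    To see how close each node is to these watermarks, the _cat/allocation API reports per-node disk usage, for example:

    curl -X GET "localhost:9200/_cat/allocation?v"

    The disk.percent column shows how much of each node's disk is already used.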

    The right solution depends on the context - for example a production environment vs a development environment.

    Solution 1: free up disk space

    Freeing up enough disk space so that more than 5% of the disk is free will solve this problem.

    Elasticsearch won't automatically take itself out of read-only mode once enough disk is free, though; you'll have to run something like this to unlock the indices:

    $ curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

    https://curl.haxx.se/docs/manpage.html

    The -d flag supplies the request body (data).

    After the setting is applied, the response is:

    {
      "acknowledged": true
    }
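
    To confirm the block is gone, you can check the index settings (or simply retry the original insert), for example:

    curl -X GET "http://localhost:9200/customer/_settings?pretty"

    If the reset worked, index.blocks.read_only_allow_delete should no longer appear in the settings.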

    Solution 2: change the flood stage watermark setting

    Change the "cluster.routing.allocation.disk.watermark.flood_stage" setting to something else.

    It can either be set to a lower percentage or to an absolute value.

    Here's an example of how to change the setting from the docs:

    PUT _cluster/settings
    {
      "transient": {
        "cluster.routing.allocation.disk.watermark.low": "100gb",
        "cluster.routing.allocation.disk.watermark.high": "50gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
        "cluster.info.update.interval": "1m"
      }
    }
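
    If you're not using the Kibana console, the same request can be sent with curl (using the example values above):

    curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '
    {
      "transient": {
        "cluster.routing.allocation.disk.watermark.low": "100gb",
        "cluster.routing.allocation.disk.watermark.high": "50gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
        "cluster.info.update.interval": "1m"
      }
    }'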

    Again, after doing this you'll have to use the curl command above to unlock the indices, but after that they should not go into read-only mode again.

    The bulk request must be terminated by a newline

    {
      "error" : {
        "root_cause" : [
          {
            "type" : "illegal_argument_exception",
            "reason" : "The bulk request must be terminated by a newline [ ]"
          }
        ],
        "type" : "illegal_argument_exception",
        "reason" : "The bulk request must be terminated by a newline [ ]"
      },
      "status" : 400
    }

    The error occurs when importing data in bulk.

     https://stackoverflow.com/questions/48579980/bulk-request-throws-error-in-elasticsearch-6-1-1

    Add an empty line (a trailing newline) at the end of the JSON file, save it, and then run the command below:

    curl -XPOST localhost:9200/subscribers/ppl/_bulk?pretty --data-binary @customers_full.json -H 'Content-Type: application/json'

    I hope it works fine for you.
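
    For reference, the bulk body is newline-delimited JSON: each action line is followed by its document, and the request must end with a newline. A minimal customers_full.json for the command above might look like this (the _id values are only illustrative):

    { "index" : { "_id" : "1" } }
    { "name" : "John Doe" }
    { "index" : { "_id" : "2" } }
    { "name" : "Jane Doe" }

    The index and type are already given in the URL (/subscribers/ppl/_bulk), so the action lines only need the optional _id, and the file must still end with a newline after the last document.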

  • Original post: https://www.cnblogs.com/chucklu/p/10552601.html