  • Migrating Elasticsearch data with elasticdump

    Install elasticdump

    GitHub repository: https://github.com/elasticsearch-dump/elasticsearch-dump

    # yum -y install npm
    # npm config set registry https://registry.npm.taobao.org/
    # npm install -g n
    
    # The default npm version installed this way is 3.10.0, which is too old and makes the elasticdump install fail; after upgrading to 8.0.0 it works
    # n latest
    # npm install elasticdump -g
    # elasticdump --version
    6.75.0
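    
    Before running a migration, it can be worth a quick check that both clusters are reachable from the machine running elasticdump (the hostnames below are placeholders, matching the examples that follow):
    
    # curl -s http://production.es.com:9200/_cluster/health?pretty
    # curl -s http://staging.es.com:9200/_cluster/health?pretty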
    

    Migration

    # Copy an index from production to staging with analyzer and mapping:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=analyzer
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=mapping
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=data
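    
    # If the data copy above is slow, the batch size can be raised with --limit
    # (defaults to 100); 1000 here is just an example value
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=http://staging.es.com:9200/my_index \
      --type=data \
      --limit=1000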
    
    # Backup index mapping and data to files:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index_mapping.json \
      --type=mapping
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index.json \
      --type=data
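    
    # The file backups above can be restored by reversing input and output
    # (the target host is a placeholder); import the mapping before the data
    elasticdump \
      --input=/data/my_index_mapping.json \
      --output=http://staging.es.com:9200/my_index \
      --type=mapping
    elasticdump \
      --input=/data/my_index.json \
      --output=http://staging.es.com:9200/my_index \
      --type=data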
    
    # Backup an index to a gzip file using stdout:
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=$ \
      | gzip > /data/my_index.json.gz
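    
    # To restore the gzip backup, decompress it first and then import the
    # resulting JSON file as shown above
    gunzip -c /data/my_index.json.gz > /data/my_index.json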
    
    # Backup the results of a query to a file
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=query.json \
      --searchBody="{\"query\":{\"term\":{\"username\": \"admin\"}}}"
      
    # Specify searchBody from a file
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=query.json \
      --searchBody=@/data/searchbody.json
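    
    # A minimal example of what /data/searchbody.json could contain for the
    # command above (the query itself is just an illustration)
    echo '{"query":{"term":{"username":"admin"}}}' > /data/searchbody.json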
    
    # Copy data from a single shard:
    elasticdump \
      --input=http://es.com:9200/api \
      --output=http://es.com:9200/api2 \
      --input-params="{\"preference\":\"_shards:0\"}"
    
    # Backup aliases to a file
    elasticdump \
      --input=http://es.com:9200/index-name/alias-filter \
      --output=alias.json \
      --type=alias
    
    # Import aliases into ES
    elasticdump \
      --input=./alias.json \
      --output=http://es.com:9200 \
      --type=alias
    
    # Backup templates to a file
    elasticdump \
      --input=http://es.com:9200/template-filter \
      --output=templates.json \
      --type=template
    
    # Import templates into ES
    elasticdump \
      --input=./templates.json \
      --output=http://es.com:9200 \
      --type=template
    
    # Split files into multiple parts
    elasticdump \
      --input=http://production.es.com:9200/my_index \
      --output=/data/my_index.json \
      --fileSize=10mb
    
    # Import data from S3 into ES (using s3urls)
    elasticdump \
      --s3AccessKeyId "${access_key_id}" \
      --s3SecretAccessKey "${access_key_secret}" \
      --input "s3://${bucket_name}/${file_name}.json" \
      --output=http://production.es.com:9200/my_index
    
    # Export ES data to S3 (using s3urls)
    elasticdump \
      --s3AccessKeyId "${access_key_id}" \
      --s3SecretAccessKey "${access_key_secret}" \
      --input=http://production.es.com:9200/my_index \
      --output "s3://${bucket_name}/${file_name}.json"
    
    # Import data from MINIO (s3 compatible) into ES (using s3urls)
    elasticdump \
      --s3AccessKeyId "${access_key_id}" \
      --s3SecretAccessKey "${access_key_secret}" \
      --input "s3://${bucket_name}/${file_name}.json" \
      --output=http://production.es.com:9200/my_index \
      --s3ForcePathStyle true \
      --s3Endpoint https://production.minio.co
    
    # Export ES data to MINIO (s3 compatible) (using s3urls)
    elasticdump \
      --s3AccessKeyId "${access_key_id}" \
      --s3SecretAccessKey "${access_key_secret}" \
      --input=http://production.es.com:9200/my_index \
      --output "s3://${bucket_name}/${file_name}.json" \
      --s3ForcePathStyle true \
      --s3Endpoint https://production.minio.co
    
    # Import data from CSV file into ES (using csvurls)
    # The csv:// prefix must be included so the input is parsed as a CSV file,
    # e.g. --input "csv://${file_path}.csv"
    elasticdump \
      --input "csv:///data/cars.csv" \
      --output=http://production.es.com:9200/my_index \
      --csvSkipRows 1 \
      --csvDelimiter ";"
    # --csvSkipRows skips already-parsed data rows (this does not include the header row)
    # --csvDelimiter defaults to ','
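    
    # A small placeholder /data/cars.csv matching the command above: one header
    # row plus ';'-delimited data rows (the column names are made up for illustration)
    printf '%s\n' \
      'brand;model;year' \
      'Toyota;Corolla;2015' \
      'Honda;Civic;2018' > /data/cars.csv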
    
    

    When the source or destination Elasticsearch requires a username and password

    # Prepend user:password@ to the address
    elasticdump --input=http://192.168.1.2:9200/my_index --output=http://user:password@192.168.1.2:9200/my_index --type=data
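    
    # If the password contains URL-special characters such as @ : or /, they must be
    # percent-encoded in the URL; e.g. a password of p@ss#1 would be written as
    # p%40ss%231 (the credentials here are made up for illustration)
    elasticdump --input=http://192.168.1.2:9200/my_index --output=http://user:p%40ss%231@192.168.1.2:9200/my_index --type=data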
    
    elasticdump \
      --input=http://192.168.1.1:9200/my_index \
      --output=http://192.168.3.2:9200/my_index \
      --type=analyzer
    elasticdump \
      --input=http://192.168.1.1:9200/my_index \
      --output=http://192.168.3.2:9200/my_index \
      --type=settings
    elasticdump \
      --input=http://192.168.1.1:9200/my_index \
      --output=http://192.168.3.2:9200/my_index \
      --type=mapping
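    
    A complete migration can also be scripted by running the types in order; a minimal sketch, assuming the hosts and index name below are placeholders:
    
    #!/bin/bash
    # Migrate one index end to end: analyzer and settings first, then mapping, then data
    SRC="http://192.168.1.1:9200/my_index"
    DST="http://192.168.3.2:9200/my_index"
    
    for type in analyzer settings mapping data; do
      elasticdump --input="${SRC}" --output="${DST}" --type="${type}" || exit 1
    done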
    
  • Original post: https://www.cnblogs.com/sanduzxcvbnm/p/15386489.html