  • Redis migration tool redis-shake

    Redis-shake is a tool for synchronizing data between two redis databases.

    GitHub: https://github.com/alibaba/RedisShake

    1. Download and install

    wget -c https://github.com/alibaba/RedisShake/releases/download/release-v1.6.24-20191220/redis-shake-1.6.24.tar.gz
    
    tar -zxvf  redis-shake-1.6.24.tar.gz
    
    cd redis-shake-1.6.24

    2. Configuration file: vim redis-shake.conf

    # this is the configuration of redis-shake.
    # if you have any problem, please visit https://github.com/alibaba/RedisShake/wiki/FAQ
    
    # id
    id = redis-shake
    
    # log file. if not set, logs go to stdout (e.g. /var/log/redis-shake.log)
    log.file = /data/db_tools/soft/redis/redis-shake-1.6.24/redis-shake-new.log
    # log level: "none", "error", "warn", "info", "debug", "all". default is "info". "debug" == "all"
    log.level = info
    # pid path (e.g. /var/run/). if not set, the pid file is written to the working directory.
    # note this is a directory; the actual pid file is `{pid_path}/{id}.pid`
    pid_path = 
    
    # pprof port.
    system_profile = 9310
    # restful port for viewing metrics, set -1 to disable. in `restore` mode RedisShake exits
    # after the RDB restore finishes only if this value is -1; otherwise it blocks forever.
    http_profile = 9320
    
    # number of parallel routines used to sync one RDB file. default is 64.
    parallel = 4
    
    # source redis configuration.
    # used in `dump`, `sync` and `rump`.
    # source redis type, e.g. "standalone" (default), "sentinel" or "cluster".
    #   1. "standalone": standalone db mode.
    #   2. "sentinel": the redis address is read from sentinel.
    #   3. "cluster": the source redis has several db.
    #   4. "proxy": the proxy address, currently, only used in "rump" mode.
    source.type = standalone
    # ip:port
    # the source address can be the following:
    #   1. single db address. for "standalone" type.
    #   2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
    #   3. cluster that has several db nodes split by semicolon(;). for "cluster" type. e.g., 10.1.1.1:20331;10.1.1.2:20441.
    #   4. proxy address(used in "rump" mode only). for "proxy" type.
    # source redis address. for sentinel or open-source cluster mode, the format is
    # "master_name:master_or_slave@sentinel_address". for other cluster architectures, such as
    # codis, twemproxy or aliyun proxy, list the db addresses of all masters or slaves.
    source.address = source_ip:6381
    # password of db/proxy. even if type is sentinel.
    source.password_raw = AComCdgN09srE
    # auth type, don't modify it
    source.auth_type = auth
    # tls enable, true or false. Currently, only support standalone.
    # open source redis does NOT support tls so far, but some cloud versions do.
    source.tls_enable = false
    # input RDB file.
    # used in `decode` and `restore`.
    # if the input is list split by semicolon(;), redis-shake will restore the list one by one.
    # e.g., rdb.0;rdb.1;rdb.2 will be restored one by one.
    source.rdb.input = local
    # the concurrence of RDB syncing, default is len(source.address) or len(source.rdb.input).
    # used in `dump`, `sync` and `restore`. 0 means default.
    # this is useless when source.type isn't cluster or there is only one input RDB.
    # if there are 5 db nodes / input rdb files but source.rdb.parallel = 3, only 3 RDBs are
    # pulled concurrently; the 4th starts once one of the first 3 finishes its full sync and
    # enters the incremental stage. eventually len(source.address) or len(source.rdb.input)
    # incremental threads run at the same time.
    source.rdb.parallel = 0
    # for special cloud vendor: ucloud
    # used in `decode` and `restore`.
    # ucloud cluster RDB files prefix entries with a slot id that must be stripped: set to ucloud_cluster.
    source.rdb.special_cloud = 
    
    # target redis configuration. used in `restore`, `sync` and `rump`.
    # the type of target redis can be "standalone", "proxy" or "cluster".
    #   1. "standalone": standalone db mode.
    #   2. "sentinel": the redis address is read from sentinel.
    #   3. "cluster": open source cluster (not supported currently).
    #   4. "proxy": proxy layer ahead redis. Data will be inserted in a round-robin way if more than 1 proxy given.
    target.type = cluster
    # ip:port
    # the target address can be the following:
    #   1. single db address. for "standalone" type.
    #   2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
    #   3. cluster that has several db nodes split by semicolon(;). for "cluster" type.
    #   4. proxy address(used in "rump" mode only). for "proxy" type.
    target.address = target_cluster-01_ip:6379;target_cluster-02_ip:6379;target_cluster-03_ip:6379
    # password of db/proxy. even if type is sentinel.
    target.password_raw = AComCdgN09srE
    # auth type, don't modify it
    target.auth_type = auth
    # all the data will be written into this db. < 0 means disable.
    target.db = -1
    # tls enable, true or false. Currently, only support standalone.
    # open source redis does NOT support tls so far, but some cloud versions do.
    target.tls_enable = false
    # output RDB file prefix.
    # used in `decode` and `dump`.
    # for `decode` or `dump`, this is the output rdb prefix. e.g., with 3 source dbs the dump
    # files are ${output_rdb}.0, ${output_rdb}.1, ${output_rdb}.2
    target.rdb.output = local_dump
    # some redis proxy like twemproxy doesn't support to fetch version, so please set it here.
    # e.g., target.version = 4.0
    target.version =
    
    # used for expired keys: when the source and target clocks differ, this offset is added on the target side.
    fake_time =
    
    # force rewrite when destination restore has the key
    # used in `restore`, `sync` and `rump`.
    rewrite = true
    
    # filter db, key, slot, lua.
    # filter db.
    # used in `restore`, `sync` and `rump`.
    # e.g., "0;5;10" means match db0, db5 and db10.
    # at most one of `filter.db.whitelist` and `filter.db.blacklist` parameters can be given.
    # if the filter.db.whitelist is not empty, the given db list will be passed while others filtered.
    # if the filter.db.blacklist is not empty, the given db list will be filtered while others passed.
    # all dbs will be passed if no condition given.
    filter.db.whitelist =
    # dbs in the blacklist are filtered while the others pass, e.g., 0;5;10 filters db0, db5 and db10.
    filter.db.blacklist =
    # filter key with prefix string. multiple keys are separated by ';'.
    # e.g., "abc;bzz" matches "abc", "abc1", "abcxxx", "bzz" and "bzzwww".
    # used in `restore`, `sync` and `rump`.
    # at most one of `filter.key.whitelist` and `filter.key.blacklist` parameters can be given.
    # if the filter.key.whitelist is not empty, the given keys will be passed while others filtered.
    # if the filter.key.blacklist is not empty, the given keys will be filtered while others passed.
    # all the namespace will be passed if no condition given.
    filter.key.whitelist =
    # keys with the given prefixes are filtered out, e.g., "abc" blocks abc, abc1 and abcxxx.
    filter.key.blacklist =
    # filter given slot, multiple slots are separated by ';'.
    # e.g., 1;2;3
    # used in `sync`.
    filter.slot =
    # filter lua scripts. true means they are not passed. however, in redis 5.0 lua is
    # converted to a transaction (multi+{commands}+exec), which is passed.
    filter.lua = false
    
    # big key threshold, the default is 500 * 1024 * 1024 bytes. if a value is bigger than
    # this threshold, its fields are split and written into the target one by one in order.
    # if the target Redis type is Codis, this should be set to 1; please check the FAQ for
    # the reason.
    # setting 1 is also recommended when the target's major version is lower than the source's.
    big_key_threshold = 524288000
    
    # use psync command.
    # used in `sync`.
    # psync is used by default; set false to sync with the `sync` command. redis-shake
    # automatically falls back to `sync` for source versions older than 2.8.
    psync = true
    
    # enable metric
    # used in `sync`.
    metric = true
    # print in log
    metric.print_log = false
    
    # sender information.
    # sender flush buffer size in bytes; the buffer is flushed once it exceeds this threshold.
    # used in `sync`.
    sender.size = 104857600
    # sender flush buffer size in number of oplogs; flushed when bigger than this threshold.
    # used in `sync`.
    # when the target is a cluster, increasing this value consumes extra memory.
    sender.count = 4095
    # delay channel size. once one oplog is sent to target redis, the oplog id and timestamp will also
    # stored in this delay queue. this timestamp will be used to calculate the time delay when receiving
    # ack from target redis.
    # used in `sync`.
    sender.delay_channel_size = 65535
    
    # enable keep_alive option in TCP when connecting redis.
    # the unit is second.
    # 0 means disable.
    keep_alive = 0
    
    # used in `rump`.
    # number of keys captured each time. default is 100.
    scan.key_number = 50
    # used in `rump`.
    # we support some special redis types that don't use default `scan` command like alibaba cloud and tencent cloud.
    # currently supported values: "tencent_cluster" for tencent cloud cluster and
    # "aliyun_cluster" for alibaba cloud cluster.
    scan.special_cloud =
    # used in `rump`.
    # we support fetching keys from a given file that lists them.
    # some cloud versions support neither sync/psync nor scan; for those, the full key list
    # can be read from a file, one key per line.
    scan.key_file =
    
    # limit the rate of transmission. Only used in `rump` currently.
    # e.g., qps = 1000 means pass 1000 keys per second. default is 500,000(0 means default)
    qps = 200000
    
    # ----------------splitter----------------
    # the variables below are not used by the current open-source version; leave them unset.
    
    # replace hash tag.
    # used in `sync`.
    replace_hash_tag = false
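    The filter.key.whitelist / filter.key.blacklist rules above (prefix match, at most one of the two lists set) can be sketched as follows. This is an illustrative model of the documented behavior, not redis-shake's actual code:

```python
# Illustrative sketch of redis-shake's filter.key.* semantics:
# prefix matching, and at most one of whitelist/blacklist may be set.
def key_passes(key, whitelist=(), blacklist=()):
    if whitelist and blacklist:
        raise ValueError("at most one of whitelist/blacklist may be given")
    if whitelist:                        # only keys with a listed prefix pass
        return any(key.startswith(p) for p in whitelist)
    if blacklist:                        # keys with a listed prefix are dropped
        return not any(key.startswith(p) for p in blacklist)
    return True                          # no condition given: everything passes

# e.g., whitelist "abc;bzz" passes "abc1" and "bzzwww" but filters "user:1"
```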

    3. Start

    ./redis-shake.linux -conf=redis-shake.conf -type=xxx  # xxx is one of sync, restore, dump, decode, rump; "sync" does full + incremental

    4. Exporting and importing data

    Export:
    ./redis-shake.linux -conf=redis-shake.conf -type=dump
    
    Import:
    ./redis-shake.linux -conf=redis-shake.conf -type=restore
    Note:
    when importing, configure the data source to restore from in redis-shake.conf:
    source.rdb.input = local_dump.0
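    Switching between dump and restore means editing source.rdb.input each time. A small helper (illustrative, not part of redis-shake) can rewrite that line in the config text:

```python
# Illustrative helper (not part of redis-shake): point source.rdb.input at a
# dump file by rewriting the matching line in the config text.
import re

def set_rdb_input(conf_text, rdb_path):
    # replace only the value of the source.rdb.input key, keep everything else
    return re.sub(r"(?m)^(\s*source\.rdb\.input\s*=).*$",
                  rf"\1 {rdb_path}", conf_text)

conf = "id = redis-shake\nsource.rdb.input = local\n"
print(set_rdb_input(conf, "local_dump.0"))
```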

    5. Verify data consistency with redis-full-check. If the source or target is a cluster, separate the node addresses with semicolons (;) and wrap the whole list in double quotes ("").

    GitHub: https://github.com/alibaba/RedisFullCheck

    5.1 Download and install

    wget -c https://github.com/alibaba/RedisFullCheck/releases/download/release-v1.4.7-20191203/redis-full-check-1.4.7.tar.gz
    
    tar -zxvf redis-full-check-1.4.7.tar.gz && cd redis-full-check-1.4.7

    5.2 Run the check

    ./redis-full-check -s source_ip:6381 -p "AComCdgN09srE" -t "target_cluster-01_ip:6379;target_cluster-02_ip:6379;target_cluster-03_ip:6379" -a "AComCdgN09srE" --comparemode=1 --comparetimes=1 --qps=10 --batchcount=100 --sourcedbtype=0 --targetdbfilterlist=0 --targetdbtype=1
    Note:
    if the target is a cluster, set --targetdbtype=1; if the source is a standalone node, set --sourcedbtype=0.
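    The pairing of the topology flags with the addresses can be captured in a small argv builder. This is an illustrative sketch; the flag names come from the invocation above:

```python
# Illustrative builder for the redis-full-check command line shown above;
# flag names are taken from the invocation in this section.
def full_check_argv(source, src_pwd, target, tgt_pwd,
                    source_is_cluster=False, target_is_cluster=False):
    return [
        "./redis-full-check",
        "-s", source, "-p", src_pwd,
        "-t", target, "-a", tgt_pwd,
        "--sourcedbtype=%d" % (1 if source_is_cluster else 0),
        "--targetdbtype=%d" % (1 if target_is_cluster else 0),
    ]

# standalone source, cluster target -- matches the note above
argv = full_check_argv("src:6381", "pw", "t1:6379;t2:6379", "pw",
                       target_is_cluster=True)
```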

    5.3 Check result:

    [INFO 2021-12-28-16:05:14 full_check.go:328]: --------------- finished! ----------------
    all finish successfully, totally 0 key(s) and 0 field(s) conflict

    5.4 Open the result database:

    sqlite3 result.db.1

    5.5 View the conflicting keys:

    select * from key;
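    The same query can be run from Python via the sqlite3 module. The schema below is a mock for illustration; the real columns of result.db are defined by redis-full-check:

```python
# Query the conflict table the way `sqlite3 result.db.1` + `select * from key;`
# does, using Python's sqlite3 module. The schema here is a mock; the actual
# result.db schema is defined by redis-full-check.
import sqlite3

conn = sqlite3.connect(":memory:")       # stand-in for result.db.1
conn.execute("create table key (key text, type text, conflict_type text)")
conn.execute("insert into key values ('user:1', 'string', 'value')")

rows = conn.execute("select * from key").fetchall()
for row in rows:
    print(row)
```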

    6. Run ./redis-full-check --help for the full option list

    Selected options:
    -s  source Redis address and port. if the source is a cluster, separate the node addresses with semicolons (;) and wrap the list in double quotes ("). required.
    -p  source Redis password.
    -t  target Redis address and port. if the target is a cluster, separate the node addresses with semicolons (;) and wrap the list in double quotes ("). required.
    -a  target Redis password.
    --sourcedbtype  source type: 0 standalone/master-slave; 1 cluster; 2 aliyun/tencent cloud. e.g., --sourcedbtype=1
    --sourcedbfilterlist  dbs to check on the source. not needed for open-source cluster Redis; for other types, omitting it checks all dbs. join multiple dbs with semicolons (;), e.g., --sourcedbfilterlist=0;1;2
    --targetdbtype  target type: 0 standalone/master-slave; 1 cluster; 2 aliyun/tencent cloud. e.g., --targetdbtype=0
    --targetdbfilterlist  dbs to check on the target. not needed for open-source cluster Redis; for other types, omitting it checks all dbs. join multiple dbs with semicolons (;), e.g., --targetdbfilterlist=0;1;2
    -d  file that stores the list of conflicting data, default result.db
    --comparetimes  number of check rounds. default 3; minimum 1; no maximum, but at most 5 is recommended. e.g., --comparetimes=1
    -m  compare mode: 1 full compare; 2 compare value length only; 3 check key existence only; 4 full compare but skip big keys
    --qps  rate limit. minimum 1; maximum depends on server capacity. e.g., --qps=10
    --filterlist  keys to compare, separated by vertical bars (|). abc* matches every key starting with abc; abc matches only the key abc. e.g., --filterlist=abc*|efg|m*
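    The --filterlist matching rule (patterns split on |, a trailing * meaning prefix match, otherwise exact match) can be sketched as follows. Illustrative only, not redis-full-check's code:

```python
# Illustrative sketch of redis-full-check's --filterlist matching: patterns
# are separated by '|'; a trailing '*' means prefix match, otherwise exact.
def filterlist_matches(key, filterlist):
    for pat in filterlist.split("|"):
        if pat.endswith("*"):
            if key.startswith(pat[:-1]):   # prefix pattern, e.g. abc*
                return True
        elif key == pat:                   # exact pattern, e.g. efg
            return True
    return False
```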
  • Original post: https://www.cnblogs.com/hankyoon/p/15741329.html